00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2467
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3732
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.127 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.128 The recommended git tool is: git
00:00:00.128 using credential 00000000-0000-0000-0000-000000000002
00:00:00.130 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.181 Fetching changes from the remote Git repository
00:00:00.182 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.226 Using shallow fetch with depth 1
00:00:00.226 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.226 > git --version # timeout=10
00:00:00.263 > git --version # 'git version 2.39.2'
00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.286 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.286 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.178 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.190 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.201 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.201 > git config core.sparsecheckout # timeout=10
00:00:07.213 > git read-tree -mu HEAD # timeout=10
00:00:07.229 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.247 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.247 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.382 [Pipeline] Start of Pipeline
00:00:07.396 [Pipeline] library
00:00:07.397 Loading library shm_lib@master
00:00:07.397 Library shm_lib@master is cached. Copying from home.
00:00:07.412 [Pipeline] node
00:00:07.444 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.446 [Pipeline] {
00:00:07.457 [Pipeline] catchError
00:00:07.458 [Pipeline] {
00:00:07.468 [Pipeline] wrap
00:00:07.475 [Pipeline] {
00:00:07.481 [Pipeline] stage
00:00:07.482 [Pipeline] { (Prologue)
00:00:07.493 [Pipeline] echo
00:00:07.494 Node: VM-host-SM38
00:00:07.498 [Pipeline] cleanWs
00:00:07.507 [WS-CLEANUP] Deleting project workspace...
00:00:07.507 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.513 [WS-CLEANUP] done
00:00:07.736 [Pipeline] setCustomBuildProperty
00:00:07.803 [Pipeline] httpRequest
00:00:08.306 [Pipeline] echo
00:00:08.307 Sorcerer 10.211.164.20 is alive
00:00:08.317 [Pipeline] retry
00:00:08.319 [Pipeline] {
00:00:08.332 [Pipeline] httpRequest
00:00:08.337 HttpMethod: GET
00:00:08.337 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.338 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.340 Response Code: HTTP/1.1 200 OK
00:00:08.340 Success: Status code 200 is in the accepted range: 200,404
00:00:08.341 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.308 [Pipeline] }
00:00:09.325 [Pipeline] // retry
00:00:09.331 [Pipeline] sh
00:00:09.643 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.660 [Pipeline] httpRequest
00:00:10.061 [Pipeline] echo
00:00:10.063 Sorcerer 10.211.164.20 is alive
00:00:10.072 [Pipeline] retry
00:00:10.074 [Pipeline] {
00:00:10.088 [Pipeline] httpRequest
00:00:10.093 HttpMethod: GET
00:00:10.093 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:10.094 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:10.106 Response Code: HTTP/1.1 200 OK
00:00:10.106 Success: Status code 200 is in the accepted range: 200,404
00:00:10.107 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:56.793 [Pipeline] }
00:00:56.811 [Pipeline] // retry
00:00:56.819 [Pipeline] sh
00:00:57.109 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:59.675 [Pipeline] sh
00:00:59.962 + git -C spdk log --oneline -n5
00:00:59.962 e01cb43b8 mk/spdk.common.mk sed the minor version
00:00:59.962 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:59.962 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:59.962 66289a6db build: use VERSION file for storing version
00:00:59.962 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:59.983 [Pipeline] withCredentials
00:00:59.996 > git --version # timeout=10
00:01:00.010 > git --version # 'git version 2.39.2'
00:01:00.031 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:00.033 [Pipeline] {
00:01:00.043 [Pipeline] retry
00:01:00.045 [Pipeline] {
00:01:00.061 [Pipeline] sh
00:01:00.347 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:00.362 [Pipeline] }
00:01:00.380 [Pipeline] // retry
00:01:00.385 [Pipeline] }
00:01:00.402 [Pipeline] // withCredentials
00:01:00.412 [Pipeline] httpRequest
00:01:00.817 [Pipeline] echo
00:01:00.819 Sorcerer 10.211.164.20 is alive
00:01:00.829 [Pipeline] retry
00:01:00.831 [Pipeline] {
00:01:00.846 [Pipeline] httpRequest
00:01:00.852 HttpMethod: GET
00:01:00.852 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:00.853 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:00.860 Response Code: HTTP/1.1 200 OK
00:01:00.860 Success: Status code 200 is in the accepted range: 200,404
00:01:00.861 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:31.922 [Pipeline] }
00:01:31.939 [Pipeline] // retry
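All three artifacts above come out of the same cache protocol: ask the internal package mirror ("Sorcerer", 10.211.164.20) for a tarball keyed by repository name and commit SHA, accept 200 or 404, and unpack with tar --no-same-owner. A minimal shell equivalent of that pattern (fetch_package is a hypothetical helper written for illustration; the pipeline itself uses the Jenkins httpRequest step rather than curl):

  fetch_package() {
    local name=$1 sha=$2
    local tarball="${name}_${sha}.tar.gz"
    # -f: fail on HTTP errors; 200 is a cache hit, 404 would mean "build from source"
    curl -sf -o "$tarball" "http://10.211.164.20/packages/$tarball" || return 1
    tar --no-same-owner -xf "$tarball"   # don't restore the cache's owner/group
  }
  fetch_package jbp db4637e8b949f278f369ec13f70585206ccd9507
  fetch_package spdk e01cb43b8578f9155d07a9bc6eee4e70a3af96b0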
00:01:31.948 [Pipeline] sh
00:01:32.236 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:33.633 [Pipeline] sh
00:01:33.916 + git -C dpdk log --oneline -n5
00:01:33.916 caf0f5d395 version: 22.11.4
00:01:33.916 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:33.916 dc9c799c7d vhost: fix missing spinlock unlock
00:01:33.916 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:33.916 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:33.934 [Pipeline] writeFile
00:01:33.950 [Pipeline] sh
00:01:34.237 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:34.251 [Pipeline] sh
00:01:34.536 + cat autorun-spdk.conf
00:01:34.536 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.536 SPDK_TEST_NVME=1
00:01:34.536 SPDK_TEST_FTL=1
00:01:34.536 SPDK_TEST_ISAL=1
00:01:34.536 SPDK_RUN_ASAN=1
00:01:34.536 SPDK_RUN_UBSAN=1
00:01:34.536 SPDK_TEST_XNVME=1
00:01:34.536 SPDK_TEST_NVME_FDP=1
00:01:34.536 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:34.536 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:34.536 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:34.545 RUN_NIGHTLY=1
00:01:34.547 [Pipeline] }
00:01:34.561 [Pipeline] // stage
00:01:34.575 [Pipeline] stage
00:01:34.577 [Pipeline] { (Run VM)
00:01:34.590 [Pipeline] sh
00:01:34.941 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:34.941 + echo 'Start stage prepare_nvme.sh'
00:01:34.941 Start stage prepare_nvme.sh
00:01:34.941 + [[ -n 1 ]]
00:01:34.941 + disk_prefix=ex1
00:01:34.941 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:34.941 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:34.941 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:34.941 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.941 ++ SPDK_TEST_NVME=1
00:01:34.941 ++ SPDK_TEST_FTL=1
00:01:34.941 ++ SPDK_TEST_ISAL=1
00:01:34.941 ++ SPDK_RUN_ASAN=1
00:01:34.941 ++ SPDK_RUN_UBSAN=1
00:01:34.941 ++ SPDK_TEST_XNVME=1
00:01:34.941 ++ SPDK_TEST_NVME_FDP=1
00:01:34.941 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:34.941 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:34.941 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:34.941 ++ RUN_NIGHTLY=1
00:01:34.941 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:34.941 + nvme_files=()
00:01:34.941 + declare -A nvme_files
00:01:34.941 + backend_dir=/var/lib/libvirt/images/backends
00:01:34.941 + nvme_files['nvme.img']=5G
00:01:34.941 + nvme_files['nvme-cmb.img']=5G
00:01:34.941 + nvme_files['nvme-multi0.img']=4G
00:01:34.941 + nvme_files['nvme-multi1.img']=4G
00:01:34.941 + nvme_files['nvme-multi2.img']=4G
00:01:34.941 + nvme_files['nvme-openstack.img']=8G
00:01:34.941 + nvme_files['nvme-zns.img']=5G
00:01:34.941 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:34.941 + (( SPDK_TEST_FTL == 1 ))
00:01:34.941 + nvme_files["nvme-ftl.img"]=6G
00:01:34.941 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:34.941 + nvme_files["nvme-fdp.img"]=1G
00:01:34.941 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:34.941 + for nvme in "${!nvme_files[@]}"
00:01:34.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:34.941 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:34.941 + for nvme in "${!nvme_files[@]}"
00:01:34.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
00:01:35.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:35.884 + for nvme in "${!nvme_files[@]}"
00:01:35.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:35.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:35.884 + for nvme in "${!nvme_files[@]}"
00:01:35.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:35.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:35.884 + for nvme in "${!nvme_files[@]}"
00:01:35.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:35.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:35.884 + for nvme in "${!nvme_files[@]}"
00:01:35.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:35.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:35.884 + for nvme in "${!nvme_files[@]}"
00:01:35.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:35.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:35.884 + for nvme in "${!nvme_files[@]}"
00:01:35.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
00:01:36.145 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:36.145 + for nvme in "${!nvme_files[@]}"
00:01:36.145 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:36.145 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:36.145 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:36.145 + echo 'End stage prepare_nvme.sh'
00:01:36.145 End stage prepare_nvme.sh
00:01:36.156 [Pipeline] sh
00:01:36.443 + DISTRO=fedora39
00:01:36.443 + CPUS=10
00:01:36.443 + RAM=12288
00:01:36.443 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:36.443 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:36.443
00:01:36.443 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:36.443 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:36.443 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:36.443 HELP=0
00:01:36.443 DRY_RUN=0
00:01:36.443 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
00:01:36.443 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:36.443 NVME_AUTO_CREATE=0
00:01:36.443 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
00:01:36.443 NVME_CMB=,,,,
00:01:36.443 NVME_PMR=,,,,
00:01:36.443 NVME_ZNS=,,,,
00:01:36.443 NVME_MS=true,,,,
00:01:36.443 NVME_FDP=,,,on,
00:01:36.443 SPDK_VAGRANT_DISTRO=fedora39
00:01:36.443 SPDK_VAGRANT_VMCPU=10
00:01:36.443 SPDK_VAGRANT_VMRAM=12288
00:01:36.443 SPDK_VAGRANT_PROVIDER=libvirt
00:01:36.443 SPDK_VAGRANT_HTTP_PROXY=
00:01:36.443 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:36.443 SPDK_OPENSTACK_NETWORK=0
00:01:36.443 VAGRANT_PACKAGE_BOX=0
00:01:36.443 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:36.443 FORCE_DISTRO=true
00:01:36.443 VAGRANT_BOX_VERSION=
00:01:36.443 EXTRA_VAGRANTFILES=
00:01:36.443 NIC_MODEL=e1000
00:01:36.443
00:01:36.443 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:36.443 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:38.993 Bringing machine 'default' up with 'libvirt' provider...
00:01:39.254 ==> default: Creating image (snapshot of base box volume).
00:01:39.517 ==> default: Creating domain with the following settings...
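Every backing file in the prepare_nvme.sh stage above is reported as "fmt=raw ... preallocation=falloc", which is qemu-img's create output, so the create_nvme_img.sh wrapper presumably reduces to one qemu-img call per image. A sketch of that loop under that assumption (sizes and paths as logged; the real wrapper lives in spdk/scripts/vagrant and may do more):

  declare -A nvme_files=(
    [nvme.img]=5G     [nvme-cmb.img]=5G    [nvme-zns.img]=5G
    [nvme-ftl.img]=6G [nvme-fdp.img]=1G    [nvme-openstack.img]=8G
    [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
  )
  backend_dir=/var/lib/libvirt/images/backends
  for name in "${!nvme_files[@]}"; do
    # falloc preallocation matches the "preallocation=falloc" lines in the trace
    qemu-img create -f raw -o preallocation=falloc \
      "$backend_dir/ex1-$name" "${nvme_files[$name]}"
  done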
00:01:39.517 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734383248_c9d3c172a34d90278bd9
00:01:39.517 ==> default: -- Domain type: kvm
00:01:39.517 ==> default: -- Cpus: 10
00:01:39.517 ==> default: -- Feature: acpi
00:01:39.517 ==> default: -- Feature: apic
00:01:39.517 ==> default: -- Feature: pae
00:01:39.517 ==> default: -- Memory: 12288M
00:01:39.517 ==> default: -- Memory Backing: hugepages:
00:01:39.517 ==> default: -- Management MAC:
00:01:39.517 ==> default: -- Loader:
00:01:39.517 ==> default: -- Nvram:
00:01:39.517 ==> default: -- Base box: spdk/fedora39
00:01:39.517 ==> default: -- Storage pool: default
00:01:39.517 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734383248_c9d3c172a34d90278bd9.img (20G)
00:01:39.517 ==> default: -- Volume Cache: default
00:01:39.517 ==> default: -- Kernel:
00:01:39.517 ==> default: -- Initrd:
00:01:39.517 ==> default: -- Graphics Type: vnc
00:01:39.517 ==> default: -- Graphics Port: -1
00:01:39.517 ==> default: -- Graphics IP: 127.0.0.1
00:01:39.517 ==> default: -- Graphics Password: Not defined
00:01:39.517 ==> default: -- Video Type: cirrus
00:01:39.517 ==> default: -- Video VRAM: 9216
00:01:39.517 ==> default: -- Sound Type:
00:01:39.517 ==> default: -- Keymap: en-us
00:01:39.517 ==> default: -- TPM Path:
00:01:39.517 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:39.517 ==> default: -- Command line args:
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:39.517 ==> default: -> value=-drive,
00:01:39.517 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:39.517 ==> default: -> value=-drive,
00:01:39.517 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:39.517 ==> default: -> value=-drive,
00:01:39.517 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:39.517 ==> default: -> value=-drive,
00:01:39.517 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:39.517 ==> default: -> value=-drive,
00:01:39.517 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:39.517 ==> default: -> value=-drive,
00:01:39.517 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:39.517 ==> default: -> value=-device,
00:01:39.517 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:39.777 ==> default: Creating shared folders metadata...
00:01:39.777 ==> default: Starting domain.
00:01:42.322 ==> default: Waiting for domain to get an IP address...
00:02:00.454 ==> default: Waiting for SSH to become available...
00:02:00.454 ==> default: Configuring and enabling network interfaces...
00:02:03.761 default: SSH address: 192.168.121.36:22
00:02:03.761 default: SSH username: vagrant
00:02:03.761 default: SSH auth method: private key
00:02:05.731 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:13.877 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:19.164 ==> default: Mounting SSHFS shared folder...
00:02:20.550 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:20.550 ==> default: Checking Mount..
00:02:21.494 ==> default: Folder Successfully Mounted!
00:02:21.494
00:02:21.494 SUCCESS!
00:02:21.494
00:02:21.494 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:21.494 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:21.494 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:21.494
00:02:21.504 [Pipeline] }
00:02:21.520 [Pipeline] // stage
00:02:21.529 [Pipeline] dir
00:02:21.530 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:21.531 [Pipeline] {
00:02:21.544 [Pipeline] catchError
00:02:21.545 [Pipeline] {
00:02:21.558 [Pipeline] sh
00:02:21.842 + vagrant ssh-config --host vagrant
00:02:21.842 + sed -ne '/^Host/,$p'
00:02:21.842 + tee ssh_conf
00:02:24.386 Host vagrant
00:02:24.386 HostName 192.168.121.36
00:02:24.386 User vagrant
00:02:24.386 Port 22
00:02:24.386 UserKnownHostsFile /dev/null
00:02:24.386 StrictHostKeyChecking no
00:02:24.386 PasswordAuthentication no
00:02:24.386 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:24.386 IdentitiesOnly yes
00:02:24.386 LogLevel FATAL
00:02:24.386 ForwardAgent yes
00:02:24.386 ForwardX11 yes
00:02:24.386
00:02:24.400 [Pipeline] withEnv
00:02:24.401 [Pipeline] {
00:02:24.414 [Pipeline] sh
00:02:24.696 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:24.696 source /etc/os-release
00:02:24.696 [[ -e /image.version ]] && img=$(< /image.version)
00:02:24.696 # Minimal, systemd-like check.
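Of the four emulated controllers defined above, nvme-3 is the one exercised by SPDK_TEST_NVME_FDP: it sits behind an NVM subsystem with Flexible Data Placement enabled (fdp.runs is the reclaim unit nominal size, fdp.nrg the number of reclaim groups, fdp.nruh the number of reclaim unit handles). Isolated from the full domain definition, the QEMU pattern is (flags exactly as logged; this trimmed command is illustrative, not the complete invocation):

  qemu-system-x86_64 \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096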
00:02:24.696 if [[ -e /.dockerenv ]]; then
00:02:24.696 # Clear garbage from the node'\''s name:
00:02:24.696 # agt-er_autotest_547-896 -> autotest_547-896
00:02:24.696 # $HOSTNAME is the actual container id
00:02:24.696 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:24.696 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:24.696 # We can assume this is a mount from a host where container is running,
00:02:24.696 # so fetch its hostname to easily identify the target swarm worker.
00:02:24.696 container="$(< /etc/hostname) ($agent)"
00:02:24.696 else
00:02:24.696 # Fallback
00:02:24.696 container=$agent
00:02:24.696 fi
00:02:24.696 fi
00:02:24.696 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:24.696 '
00:02:24.968 [Pipeline] }
00:02:24.983 [Pipeline] // withEnv
00:02:24.990 [Pipeline] setCustomBuildProperty
00:02:25.003 [Pipeline] stage
00:02:25.004 [Pipeline] { (Tests)
00:02:25.019 [Pipeline] sh
00:02:25.300 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:25.589 [Pipeline] sh
00:02:25.887 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:26.159 [Pipeline] timeout
00:02:26.159 Timeout set to expire in 50 min
00:02:26.161 [Pipeline] {
00:02:26.174 [Pipeline] sh
00:02:26.455 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:27.025 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:02:27.038 [Pipeline] sh
00:02:27.321 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:27.599 [Pipeline] sh
00:02:27.884 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:28.163 [Pipeline] sh
00:02:28.446 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:28.706 ++ readlink -f spdk_repo
00:02:28.706 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:28.706 + [[ -n /home/vagrant/spdk_repo ]]
00:02:28.706 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:28.706 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:28.706 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:28.706 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:28.706 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:28.706 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:28.706 + cd /home/vagrant/spdk_repo
00:02:28.706 + source /etc/os-release
00:02:28.706 ++ NAME='Fedora Linux'
00:02:28.706 ++ VERSION='39 (Cloud Edition)'
00:02:28.706 ++ ID=fedora
00:02:28.706 ++ VERSION_ID=39
00:02:28.706 ++ VERSION_CODENAME=
00:02:28.706 ++ PLATFORM_ID=platform:f39
00:02:28.706 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:28.706 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:28.706 ++ LOGO=fedora-logo-icon
00:02:28.706 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:28.706 ++ HOME_URL=https://fedoraproject.org/
00:02:28.706 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:28.706 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:28.706 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:28.706 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:28.706 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:28.706 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:28.706 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:28.706 ++ SUPPORT_END=2024-11-12
00:02:28.706 ++ VARIANT='Cloud Edition'
00:02:28.706 ++ VARIANT_ID=cloud
00:02:28.706 + uname -a
00:02:28.706 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:28.706 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:28.966 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:29.228 Hugepages
00:02:29.228 node hugesize free / total
00:02:29.228 node0 1048576kB 0 / 0
00:02:29.228 node0 2048kB 0 / 0
00:02:29.228
00:02:29.228 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:29.228 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:29.228 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:29.488 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:29.488 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:29.488 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:29.488 + rm -f /tmp/spdk-ld-path
00:02:29.488 + source autorun-spdk.conf
00:02:29.488 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:29.488 ++ SPDK_TEST_NVME=1
00:02:29.488 ++ SPDK_TEST_FTL=1
00:02:29.488 ++ SPDK_TEST_ISAL=1
00:02:29.488 ++ SPDK_RUN_ASAN=1
00:02:29.488 ++ SPDK_RUN_UBSAN=1
00:02:29.488 ++ SPDK_TEST_XNVME=1
00:02:29.488 ++ SPDK_TEST_NVME_FDP=1
00:02:29.488 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:29.489 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:29.489 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:29.489 ++ RUN_NIGHTLY=1
00:02:29.489 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:29.489 + [[ -n '' ]]
00:02:29.489 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:29.489 + for M in /var/spdk/build-*-manifest.txt
00:02:29.489 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:29.489 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:29.489 + for M in /var/spdk/build-*-manifest.txt
00:02:29.489 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:29.489 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:29.489 + for M in /var/spdk/build-*-manifest.txt
00:02:29.489 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:29.489 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:29.489 ++ uname
00:02:29.489 + [[ Linux == \L\i\n\u\x ]]
00:02:29.489 + sudo dmesg -T
00:02:29.489 + sudo dmesg --clear
00:02:29.489 + dmesg_pid=5762
00:02:29.489 + [[ Fedora Linux == FreeBSD ]]
00:02:29.489 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:29.489 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:29.489 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:29.489 + [[ -x /usr/src/fio-static/fio ]]
00:02:29.489 + sudo dmesg -Tw
00:02:29.489 + export FIO_BIN=/usr/src/fio-static/fio
00:02:29.489 + FIO_BIN=/usr/src/fio-static/fio
00:02:29.489 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:29.489 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:29.489 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:29.489 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:29.489 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:29.489 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:29.489 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:29.489 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:29.489 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:29.749 21:08:19 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:29.749 21:08:19 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:29.749 21:08:19 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:02:29.749 21:08:19 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:29.749 21:08:19 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:29.749 21:08:19 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:29.749 21:08:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:29.749 21:08:19 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:29.749 21:08:19 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:29.749 21:08:19 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:29.749 21:08:19 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:29.749 21:08:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.750 21:08:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.750 21:08:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.750 21:08:19 -- paths/export.sh@5 -- $ export PATH
00:02:29.750 21:08:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.750 21:08:19 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:29.750 21:08:19 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:29.750 21:08:19 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734383299.XXXXXX
00:02:29.750 21:08:19 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734383299.AQyqCO
00:02:29.750 21:08:19 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:29.750 21:08:19 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:02:29.750 21:08:19 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:29.750 21:08:19 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:29.750 21:08:19 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:29.750 21:08:19 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:29.750 21:08:19 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:29.750 21:08:19 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:29.750 21:08:19 -- common/autotest_common.sh@10 -- $ set +x
00:02:29.750 21:08:19 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:29.750 21:08:19 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:29.750 21:08:19 -- pm/common@17 -- $ local monitor
00:02:29.750 21:08:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.750 21:08:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.750 21:08:19 -- pm/common@21 -- $ date +%s
00:02:29.750 21:08:19 -- pm/common@25 -- $ sleep 1
00:02:29.750 21:08:19 -- pm/common@21 -- $ date +%s
00:02:29.750 21:08:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734383299
00:02:29.750 21:08:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734383299
00:02:29.750 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734383299_collect-vmstat.pm.log
00:02:29.750 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734383299_collect-cpu-load.pm.log
00:02:30.692 21:08:20 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:30.692 21:08:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:30.692 21:08:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:30.692 21:08:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:30.692 21:08:20 -- spdk/autobuild.sh@16 -- $ date -u
00:02:30.692 Mon Dec 16 09:08:20 PM UTC 2024
00:02:30.692 21:08:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:30.692 v25.01-rc1-2-ge01cb43b8
00:02:30.692 21:08:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:30.692 21:08:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:30.692 21:08:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:30.692 21:08:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.692 21:08:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.692 ************************************
00:02:30.692 START TEST asan
00:02:30.692 ************************************
00:02:30.692 using asan
00:02:30.692 21:08:20 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:30.692
00:02:30.692 real 0m0.000s
00:02:30.692 user 0m0.000s
00:02:30.692 sys 0m0.000s
00:02:30.692 21:08:20 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:30.692 ************************************
00:02:30.692 END TEST asan
00:02:30.692 ************************************
00:02:30.692 21:08:20 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:30.692 21:08:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:30.692 21:08:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:30.692 21:08:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:30.692 21:08:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.692 21:08:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.954 ************************************
00:02:30.954 START TEST ubsan
00:02:30.954 ************************************
00:02:30.954 using ubsan
00:02:30.954 21:08:20 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:30.954
00:02:30.954 real 0m0.000s
00:02:30.954 user 0m0.000s
00:02:30.954 sys 0m0.000s
00:02:30.954 ************************************
00:02:30.954 END TEST ubsan
00:02:30.954 ************************************
00:02:30.954 21:08:20 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:30.954 21:08:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:30.954 21:08:20 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:30.954 21:08:20 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:30.954 21:08:20 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:30.954 21:08:20 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:30.954 21:08:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.954 21:08:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.954 ************************************
00:02:30.954 START TEST build_native_dpdk
00:02:30.954 ************************************
00:02:30.954 21:08:20 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:30.954 caf0f5d395 version: 22.11.4
00:02:30.954 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:30.954 dc9c799c7d vhost: fix missing spinlock unlock
00:02:30.954 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:30.954 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
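The xtrace output here (and continuing below) walks SPDK's cmp_versions helper from scripts/common.sh: both version strings are split on '.', '-', and ':' via IFS, then compared field by field as integers. Distilled, the helper amounts to something like the following (a simplified sketch of the same idea, not the verbatim upstream function):

  cmp_versions() {   # usage: cmp_versions 22.11.4 '<' 21.11.0
    local IFS=.-: op=$2 i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$3"
    # Compare up to the longer of the two field lists; missing fields count as 0
    for (( i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++ )); do
      if (( ${v1[i]:-0} > ${v2[i]:-0} )); then [[ $op == '>' || $op == '>=' ]]; return; fi
      if (( ${v1[i]:-0} < ${v2[i]:-0} )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == *=* ]]   # all fields equal: true only for ops that accept equality
  }

As the trace below shows, cmp_versions 22.11.4 '<' 21.11.0 fails at the first field (22 > 21, hence "return 1"), while the later 22.11.4 '<' 24.07.0 check succeeds ("return 0"), which gates the lib/pcapng/rte_pcapng.c patch applied right after it.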
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:30.954 patching file config/rte_config.h
00:02:30.954 Hunk #1 succeeded at 60 (offset 1 line).
00:02:30.954 21:08:20 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:30.954 21:08:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:30.955 21:08:20 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:02:30.955 patching file lib/pcapng/rte_pcapng.c
00:02:30.955 Hunk #1 succeeded at 110 (offset -18 lines).
00:02:30.955 21:08:20 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:30.955 21:08:20 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:02:30.955 21:08:20 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:30.955 21:08:20 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:30.955 21:08:20 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:30.955 21:08:20 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:30.955 21:08:20 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:35.160 The Meson build system
00:02:35.160 Version: 1.5.0
00:02:35.160 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:35.160 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:35.160 Build type: native build
00:02:35.160 Program cat found: YES (/usr/bin/cat)
00:02:35.160 Project name: DPDK
00:02:35.160 Project version: 22.11.4
00:02:35.160 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:35.160 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:35.160 Host machine cpu family: x86_64
00:02:35.160 Host machine cpu: x86_64
00:02:35.160 Message: ## Building in Developer Mode ##
00:02:35.160 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:35.160 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:35.160 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:35.160 Program objdump found: YES (/usr/bin/objdump)
00:02:35.160 Program python3 found: YES (/usr/bin/python3)
00:02:35.160 Program cat found: YES (/usr/bin/cat)
00:02:35.160 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:35.160 Checking for size of "void *" : 8
00:02:35.160 Checking for size of "void *" : 8 (cached)
00:02:35.160 Library m found: YES
00:02:35.160 Library numa found: YES
00:02:35.160 Has header "numaif.h" : YES
00:02:35.160 Library fdt found: NO
00:02:35.160 Library execinfo found: NO
00:02:35.160 Has header "execinfo.h" : YES
00:02:35.160 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:35.160 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:35.160 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:35.160 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:35.160 Run-time dependency openssl found: YES 3.1.1
00:02:35.160 Run-time dependency libpcap found: YES 1.10.4
00:02:35.160 Has header "pcap.h" with dependency libpcap: YES
00:02:35.160 Compiler for C supports arguments -Wcast-qual: YES
00:02:35.160 Compiler for C supports arguments -Wdeprecated: YES
00:02:35.160 Compiler for C supports arguments -Wformat: YES
00:02:35.160 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:35.160 Compiler for C supports arguments -Wformat-security: NO
00:02:35.160 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:35.160 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:35.160 Compiler for C supports arguments -Wnested-externs: YES
00:02:35.160 Compiler for C supports arguments -Wold-style-definition: YES
00:02:35.160 Compiler for C supports arguments -Wpointer-arith: YES
00:02:35.160 Compiler for C supports arguments -Wsign-compare: YES
00:02:35.160 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:35.160 Compiler for C supports arguments -Wundef: YES
00:02:35.160 Compiler for C supports arguments -Wwrite-strings: YES
00:02:35.160 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:35.160 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:35.160 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:35.160 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:35.160 Compiler for C supports arguments -mavx512f: YES
00:02:35.160 Checking if "AVX512 checking" compiles: YES
00:02:35.160 Fetching value of define "__SSE4_2__" : 1
00:02:35.160 Fetching value of define "__AES__" : 1
00:02:35.160 Fetching value of define "__AVX__" : 1
00:02:35.160 Fetching value of define "__AVX2__" : 1
00:02:35.160 Fetching value of define "__AVX512BW__" : 1
00:02:35.161 Fetching value of define "__AVX512CD__" : 1
00:02:35.161 Fetching value of define "__AVX512DQ__" : 1
00:02:35.161 Fetching value of define "__AVX512F__" : 1
00:02:35.161 Fetching value of define "__AVX512VL__" : 1
00:02:35.161 Fetching value of define "__PCLMUL__" : 1
00:02:35.161 Fetching value of define "__RDRND__" : 1
00:02:35.161 Fetching value of define "__RDSEED__" : 1
00:02:35.161 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:35.161 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:35.161 Message: lib/kvargs: Defining dependency "kvargs"
00:02:35.161 Message: lib/telemetry: Defining dependency "telemetry"
00:02:35.161 Checking for function "getentropy" : YES
00:02:35.161 Message: lib/eal: Defining dependency "eal"
00:02:35.161 Message: lib/ring: Defining dependency "ring"
00:02:35.161 Message: lib/rcu: Defining dependency "rcu"
00:02:35.161 Message: lib/mempool: Defining dependency "mempool"
00:02:35.161 Message: lib/mbuf: Defining dependency "mbuf"
00:02:35.161 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:35.161 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:35.161 Compiler for C supports arguments -mpclmul: YES
00:02:35.161 Compiler for C supports arguments -maes: YES
00:02:35.161 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:35.161 Compiler for C supports arguments -mavx512bw: YES
00:02:35.161 Compiler for C supports arguments -mavx512dq: YES
00:02:35.161 Compiler for C supports arguments -mavx512vl: YES
00:02:35.161 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:35.161 Compiler for C supports arguments -mavx2: YES
00:02:35.161 Compiler for C supports arguments -mavx: YES
00:02:35.161 Message: lib/net: Defining dependency "net"
00:02:35.161 Message: lib/meter: Defining dependency "meter"
00:02:35.161 Message: lib/ethdev: Defining dependency "ethdev"
00:02:35.161 Message: lib/pci: Defining dependency "pci"
00:02:35.161 Message: lib/cmdline: Defining dependency "cmdline"
00:02:35.161 Message: lib/metrics: Defining dependency "metrics"
00:02:35.161 Message: lib/hash: Defining dependency "hash"
00:02:35.161 Message: lib/timer: Defining dependency "timer"
00:02:35.161 Fetching value of define "__AVX2__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:35.161 Message: lib/acl: Defining dependency "acl"
00:02:35.161 Message: lib/bbdev: Defining dependency "bbdev"
00:02:35.161 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:35.161 Run-time dependency libelf found: YES 0.191
00:02:35.161 Message: lib/bpf: Defining dependency "bpf"
00:02:35.161 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:35.161 Message: lib/compressdev: Defining dependency "compressdev"
00:02:35.161 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:35.161 Message: lib/distributor: Defining dependency "distributor"
00:02:35.161 Message: lib/efd: Defining dependency "efd"
00:02:35.161 Message: lib/eventdev: Defining dependency "eventdev"
00:02:35.161 Message: lib/gpudev: Defining dependency "gpudev"
00:02:35.161 Message: lib/gro: Defining dependency "gro"
00:02:35.161 Message: lib/gso: Defining dependency "gso"
00:02:35.161 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:35.161 Message: lib/jobstats: Defining dependency "jobstats"
00:02:35.161 Message: lib/latencystats: Defining dependency "latencystats"
00:02:35.161 Message: lib/lpm: Defining dependency "lpm"
00:02:35.161 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512IFMA__" : 1
00:02:35.161 Message: lib/member: Defining dependency "member"
00:02:35.161 Message: lib/pcapng: Defining dependency "pcapng"
00:02:35.161 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:35.161 Message: lib/power: Defining dependency "power"
00:02:35.161 Message: lib/rawdev: Defining dependency "rawdev"
00:02:35.161 Message: lib/regexdev: Defining dependency "regexdev"
00:02:35.161 Message: lib/dmadev: Defining dependency "dmadev"
00:02:35.161 Message: lib/rib: Defining dependency "rib"
00:02:35.161 Message: lib/reorder: Defining dependency "reorder"
00:02:35.161 Message: lib/sched: Defining dependency "sched"
00:02:35.161 Message: lib/security: Defining dependency "security"
00:02:35.161 Message: lib/stack: Defining dependency "stack"
00:02:35.161 Has header "linux/userfaultfd.h" : YES
00:02:35.161 Message: lib/vhost: Defining dependency "vhost"
00:02:35.161 Message: lib/ipsec: Defining dependency "ipsec"
00:02:35.161 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:35.161 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:35.161 Message: lib/fib: Defining dependency "fib"
00:02:35.161 Message: lib/port: Defining dependency "port"
00:02:35.161 Message: lib/pdump: Defining dependency "pdump"
00:02:35.161 Message: lib/table: Defining dependency "table"
00:02:35.161 Message: lib/pipeline: Defining dependency "pipeline"
00:02:35.161 Message: lib/graph: Defining dependency "graph"
00:02:35.161 Message: lib/node: Defining dependency "node"
00:02:35.161 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:35.161 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:35.161 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:35.161 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:35.161 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:35.161 Compiler for C supports arguments -Wno-unused-value: YES
00:02:35.161 Compiler for C supports arguments -Wno-format: YES
00:02:35.161 Compiler for C supports arguments -Wno-format-security: YES
00:02:35.161 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:35.161 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:35.161 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:35.161 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:36.544 Fetching value of define "__AVX2__" : 1 (cached)
00:02:36.544 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:36.544 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:36.544 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:36.544 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:36.544 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:36.544 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:36.544 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:36.544 Configuring doxy-api.conf using configuration
00:02:36.544 Program sphinx-build found: NO
00:02:36.544 Configuring rte_build_config.h using configuration
00:02:36.544 Message:
00:02:36.544 =================
00:02:36.544 Applications Enabled
00:02:36.545 =================
00:02:36.545
00:02:36.545 apps:
00:02:36.545 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:02:36.545 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:02:36.545 test-security-perf,
00:02:36.545
00:02:36.545 Message:
00:02:36.545 =================
00:02:36.545 Libraries Enabled
00:02:36.545 =================
00:02:36.545
00:02:36.545 libs:
00:02:36.545 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:02:36.545 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:02:36.545 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:02:36.545 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:02:36.545 member, pcapng, power, rawdev, regexdev, dmadev, rib,
reorder, 00:02:36.545 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:36.545 table, pipeline, graph, node, 00:02:36.545 00:02:36.545 Message: 00:02:36.545 =============== 00:02:36.545 Drivers Enabled 00:02:36.545 =============== 00:02:36.545 00:02:36.545 common: 00:02:36.545 00:02:36.545 bus: 00:02:36.545 pci, vdev, 00:02:36.545 mempool: 00:02:36.545 ring, 00:02:36.545 dma: 00:02:36.545 00:02:36.545 net: 00:02:36.545 i40e, 00:02:36.545 raw: 00:02:36.545 00:02:36.545 crypto: 00:02:36.545 00:02:36.545 compress: 00:02:36.545 00:02:36.545 regex: 00:02:36.545 00:02:36.545 vdpa: 00:02:36.545 00:02:36.545 event: 00:02:36.545 00:02:36.545 baseband: 00:02:36.545 00:02:36.545 gpu: 00:02:36.545 00:02:36.545 00:02:36.545 Message: 00:02:36.545 ================= 00:02:36.545 Content Skipped 00:02:36.545 ================= 00:02:36.545 00:02:36.545 apps: 00:02:36.545 00:02:36.545 libs: 00:02:36.545 kni: explicitly disabled via build config (deprecated lib) 00:02:36.545 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:36.545 00:02:36.545 drivers: 00:02:36.545 common/cpt: not in enabled drivers build config 00:02:36.545 common/dpaax: not in enabled drivers build config 00:02:36.545 common/iavf: not in enabled drivers build config 00:02:36.545 common/idpf: not in enabled drivers build config 00:02:36.545 common/mvep: not in enabled drivers build config 00:02:36.545 common/octeontx: not in enabled drivers build config 00:02:36.545 bus/auxiliary: not in enabled drivers build config 00:02:36.545 bus/dpaa: not in enabled drivers build config 00:02:36.545 bus/fslmc: not in enabled drivers build config 00:02:36.545 bus/ifpga: not in enabled drivers build config 00:02:36.545 bus/vmbus: not in enabled drivers build config 00:02:36.545 common/cnxk: not in enabled drivers build config 00:02:36.545 common/mlx5: not in enabled drivers build config 00:02:36.545 common/qat: not in enabled drivers build config 00:02:36.545 common/sfc_efx: not in enabled drivers build config 00:02:36.545 mempool/bucket: not in enabled drivers build config 00:02:36.545 mempool/cnxk: not in enabled drivers build config 00:02:36.545 mempool/dpaa: not in enabled drivers build config 00:02:36.545 mempool/dpaa2: not in enabled drivers build config 00:02:36.545 mempool/octeontx: not in enabled drivers build config 00:02:36.545 mempool/stack: not in enabled drivers build config 00:02:36.545 dma/cnxk: not in enabled drivers build config 00:02:36.545 dma/dpaa: not in enabled drivers build config 00:02:36.545 dma/dpaa2: not in enabled drivers build config 00:02:36.545 dma/hisilicon: not in enabled drivers build config 00:02:36.545 dma/idxd: not in enabled drivers build config 00:02:36.545 dma/ioat: not in enabled drivers build config 00:02:36.545 dma/skeleton: not in enabled drivers build config 00:02:36.545 net/af_packet: not in enabled drivers build config 00:02:36.545 net/af_xdp: not in enabled drivers build config 00:02:36.545 net/ark: not in enabled drivers build config 00:02:36.545 net/atlantic: not in enabled drivers build config 00:02:36.545 net/avp: not in enabled drivers build config 00:02:36.545 net/axgbe: not in enabled drivers build config 00:02:36.545 net/bnx2x: not in enabled drivers build config 00:02:36.545 net/bnxt: not in enabled drivers build config 00:02:36.545 net/bonding: not in enabled drivers build config 00:02:36.545 net/cnxk: not in enabled drivers build config 00:02:36.545 net/cxgbe: not in enabled drivers build config 00:02:36.545 net/dpaa: not in enabled drivers build config 
00:02:36.545 net/dpaa2: not in enabled drivers build config 00:02:36.545 net/e1000: not in enabled drivers build config 00:02:36.545 net/ena: not in enabled drivers build config 00:02:36.545 net/enetc: not in enabled drivers build config 00:02:36.545 net/enetfec: not in enabled drivers build config 00:02:36.545 net/enic: not in enabled drivers build config 00:02:36.545 net/failsafe: not in enabled drivers build config 00:02:36.545 net/fm10k: not in enabled drivers build config 00:02:36.545 net/gve: not in enabled drivers build config 00:02:36.545 net/hinic: not in enabled drivers build config 00:02:36.545 net/hns3: not in enabled drivers build config 00:02:36.545 net/iavf: not in enabled drivers build config 00:02:36.545 net/ice: not in enabled drivers build config 00:02:36.545 net/idpf: not in enabled drivers build config 00:02:36.545 net/igc: not in enabled drivers build config 00:02:36.545 net/ionic: not in enabled drivers build config 00:02:36.545 net/ipn3ke: not in enabled drivers build config 00:02:36.545 net/ixgbe: not in enabled drivers build config 00:02:36.545 net/kni: not in enabled drivers build config 00:02:36.545 net/liquidio: not in enabled drivers build config 00:02:36.545 net/mana: not in enabled drivers build config 00:02:36.545 net/memif: not in enabled drivers build config 00:02:36.545 net/mlx4: not in enabled drivers build config 00:02:36.545 net/mlx5: not in enabled drivers build config 00:02:36.545 net/mvneta: not in enabled drivers build config 00:02:36.545 net/mvpp2: not in enabled drivers build config 00:02:36.545 net/netvsc: not in enabled drivers build config 00:02:36.545 net/nfb: not in enabled drivers build config 00:02:36.545 net/nfp: not in enabled drivers build config 00:02:36.545 net/ngbe: not in enabled drivers build config 00:02:36.545 net/null: not in enabled drivers build config 00:02:36.545 net/octeontx: not in enabled drivers build config 00:02:36.545 net/octeon_ep: not in enabled drivers build config 00:02:36.545 net/pcap: not in enabled drivers build config 00:02:36.545 net/pfe: not in enabled drivers build config 00:02:36.545 net/qede: not in enabled drivers build config 00:02:36.545 net/ring: not in enabled drivers build config 00:02:36.545 net/sfc: not in enabled drivers build config 00:02:36.545 net/softnic: not in enabled drivers build config 00:02:36.545 net/tap: not in enabled drivers build config 00:02:36.545 net/thunderx: not in enabled drivers build config 00:02:36.545 net/txgbe: not in enabled drivers build config 00:02:36.545 net/vdev_netvsc: not in enabled drivers build config 00:02:36.545 net/vhost: not in enabled drivers build config 00:02:36.545 net/virtio: not in enabled drivers build config 00:02:36.545 net/vmxnet3: not in enabled drivers build config 00:02:36.545 raw/cnxk_bphy: not in enabled drivers build config 00:02:36.545 raw/cnxk_gpio: not in enabled drivers build config 00:02:36.545 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:36.545 raw/ifpga: not in enabled drivers build config 00:02:36.545 raw/ntb: not in enabled drivers build config 00:02:36.545 raw/skeleton: not in enabled drivers build config 00:02:36.545 crypto/armv8: not in enabled drivers build config 00:02:36.545 crypto/bcmfs: not in enabled drivers build config 00:02:36.545 crypto/caam_jr: not in enabled drivers build config 00:02:36.545 crypto/ccp: not in enabled drivers build config 00:02:36.545 crypto/cnxk: not in enabled drivers build config 00:02:36.545 crypto/dpaa_sec: not in enabled drivers build config 00:02:36.545 crypto/dpaa2_sec: not in 
enabled drivers build config 00:02:36.545 crypto/ipsec_mb: not in enabled drivers build config 00:02:36.545 crypto/mlx5: not in enabled drivers build config 00:02:36.545 crypto/mvsam: not in enabled drivers build config 00:02:36.545 crypto/nitrox: not in enabled drivers build config 00:02:36.545 crypto/null: not in enabled drivers build config 00:02:36.545 crypto/octeontx: not in enabled drivers build config 00:02:36.545 crypto/openssl: not in enabled drivers build config 00:02:36.545 crypto/scheduler: not in enabled drivers build config 00:02:36.545 crypto/uadk: not in enabled drivers build config 00:02:36.545 crypto/virtio: not in enabled drivers build config 00:02:36.545 compress/isal: not in enabled drivers build config 00:02:36.545 compress/mlx5: not in enabled drivers build config 00:02:36.545 compress/octeontx: not in enabled drivers build config 00:02:36.545 compress/zlib: not in enabled drivers build config 00:02:36.545 regex/mlx5: not in enabled drivers build config 00:02:36.545 regex/cn9k: not in enabled drivers build config 00:02:36.545 vdpa/ifc: not in enabled drivers build config 00:02:36.545 vdpa/mlx5: not in enabled drivers build config 00:02:36.545 vdpa/sfc: not in enabled drivers build config 00:02:36.545 event/cnxk: not in enabled drivers build config 00:02:36.545 event/dlb2: not in enabled drivers build config 00:02:36.545 event/dpaa: not in enabled drivers build config 00:02:36.545 event/dpaa2: not in enabled drivers build config 00:02:36.545 event/dsw: not in enabled drivers build config 00:02:36.546 event/opdl: not in enabled drivers build config 00:02:36.546 event/skeleton: not in enabled drivers build config 00:02:36.546 event/sw: not in enabled drivers build config 00:02:36.546 event/octeontx: not in enabled drivers build config 00:02:36.546 baseband/acc: not in enabled drivers build config 00:02:36.546 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:36.546 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:36.546 baseband/la12xx: not in enabled drivers build config 00:02:36.546 baseband/null: not in enabled drivers build config 00:02:36.546 baseband/turbo_sw: not in enabled drivers build config 00:02:36.546 gpu/cuda: not in enabled drivers build config 00:02:36.546 00:02:36.546 00:02:36.546 Build targets in project: 309 00:02:36.546 00:02:36.546 DPDK 22.11.4 00:02:36.546 00:02:36.546 User defined options 00:02:36.546 libdir : lib 00:02:36.546 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:36.546 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:36.546 c_link_args : 00:02:36.546 enable_docs : false 00:02:36.546 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:36.546 enable_kmods : false 00:02:36.546 machine : native 00:02:36.546 tests : false 00:02:36.546 00:02:36.546 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.546 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
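The configure summary above captures the full option set for this DPDK 22.11.4 build. As a hedged sketch reconstructed only from the "User defined options" block in the log (the CI's actual wrapper script is not shown here, so the real invocation may differ), an equivalent standalone configure step would look roughly like:

  $ cd /home/vagrant/spdk_repo/dpdk
  # Configure into build-tmp; every option mirrors the logged summary.
  $ meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
  # Build with 10 parallel jobs, matching the ninja call recorded below.
  $ ninja -C build-tmp -j10

Spelling out the `setup` subcommand also avoids the deprecation WARNING that meson prints at the end of the configure stage above.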
00:02:36.807 21:08:26 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:36.807 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:36.807 [1/738] Generating lib/rte_kvargs_def with a custom command 00:02:36.807 [2/738] Generating lib/rte_telemetry_mingw with a custom command 00:02:36.807 [3/738] Generating lib/rte_telemetry_def with a custom command 00:02:36.807 [4/738] Generating lib/rte_kvargs_mingw with a custom command 00:02:36.807 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:36.807 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:36.807 [7/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:36.807 [8/738] Linking static target lib/librte_kvargs.a 00:02:36.807 [9/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:36.807 [10/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:36.807 [11/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:36.807 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:36.807 [13/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.067 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.067 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:37.067 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:37.067 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:37.067 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:37.067 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:37.067 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.067 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:37.067 [22/738] Linking target lib/librte_kvargs.so.23.0 00:02:37.067 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:37.068 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:37.068 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:37.329 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:37.329 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.329 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:37.329 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:37.329 [30/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.329 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:37.329 [32/738] Linking static target lib/librte_telemetry.a 00:02:37.329 [33/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:37.329 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:37.329 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:37.329 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.329 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.589 [38/738] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.589 [39/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:37.589 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.589 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:37.589 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.589 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.589 [44/738] Linking target lib/librte_telemetry.so.23.0 00:02:37.589 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:37.848 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.848 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:37.848 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:37.848 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:37.848 [50/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:37.848 [51/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.848 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.848 [53/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.848 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:37.848 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.848 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.848 [57/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:37.848 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.848 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.848 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.848 [61/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:37.848 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.848 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.848 [64/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.107 [65/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:38.107 [66/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.107 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.107 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:38.108 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.108 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.108 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.108 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.108 [73/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:38.108 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.108 [75/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.108 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.108 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.108 [78/738] Compiling C 
object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.108 [79/738] Generating lib/rte_eal_def with a custom command 00:02:38.108 [80/738] Generating lib/rte_eal_mingw with a custom command 00:02:38.108 [81/738] Generating lib/rte_ring_def with a custom command 00:02:38.108 [82/738] Generating lib/rte_ring_mingw with a custom command 00:02:38.108 [83/738] Generating lib/rte_rcu_def with a custom command 00:02:38.108 [84/738] Generating lib/rte_rcu_mingw with a custom command 00:02:38.108 [85/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.108 [86/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.368 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.368 [88/738] Linking static target lib/librte_ring.a 00:02:38.368 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.368 [90/738] Generating lib/rte_mempool_def with a custom command 00:02:38.368 [91/738] Generating lib/rte_mempool_mingw with a custom command 00:02:38.368 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.368 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.628 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.629 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.629 [96/738] Generating lib/rte_mbuf_def with a custom command 00:02:38.629 [97/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.629 [98/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:38.629 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.629 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:38.629 [101/738] Linking static target lib/librte_eal.a 00:02:38.887 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.887 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.887 [104/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.887 [105/738] Linking static target lib/librte_rcu.a 00:02:38.887 [106/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.887 [107/738] Linking static target lib/librte_mempool.a 00:02:38.887 [108/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.147 [109/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.147 [110/738] Generating lib/rte_net_def with a custom command 00:02:39.147 [111/738] Generating lib/rte_net_mingw with a custom command 00:02:39.147 [112/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.147 [113/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.147 [114/738] Generating lib/rte_meter_def with a custom command 00:02:39.147 [115/738] Generating lib/rte_meter_mingw with a custom command 00:02:39.147 [116/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.147 [117/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.147 [118/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.147 [119/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.147 [120/738] Linking static target lib/librte_meter.a 00:02:39.404 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.404 [122/738] Compiling C 
object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:39.404 [123/738] Linking static target lib/librte_net.a 00:02:39.404 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.662 [125/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:39.662 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:39.662 [127/738] Linking static target lib/librte_mbuf.a 00:02:39.662 [128/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.662 [129/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.662 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.662 [131/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.662 [132/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.919 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:39.919 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:39.919 [135/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.919 [136/738] Generating lib/rte_ethdev_def with a custom command 00:02:39.919 [137/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.919 [138/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:39.919 [139/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:39.919 [140/738] Generating lib/rte_pci_def with a custom command 00:02:39.919 [141/738] Generating lib/rte_pci_mingw with a custom command 00:02:40.176 [142/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.176 [143/738] Linking static target lib/librte_pci.a 00:02:40.176 [144/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.176 [145/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.176 [146/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.176 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.176 [148/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.176 [149/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.176 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:40.176 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:40.176 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:40.176 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:40.176 [154/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:40.433 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:40.433 [156/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:40.433 [157/738] Generating lib/rte_cmdline_def with a custom command 00:02:40.433 [158/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:40.433 [159/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:40.433 [160/738] Generating lib/rte_metrics_def with a custom command 00:02:40.433 [161/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:40.433 [162/738] Generating lib/rte_metrics_mingw with a custom command 00:02:40.433 [163/738] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:40.433 [164/738] Generating lib/rte_hash_def with a custom command 00:02:40.433 [165/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:40.433 [166/738] Generating lib/rte_hash_mingw with a custom command 00:02:40.433 [167/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:40.433 [168/738] Linking static target lib/librte_cmdline.a 00:02:40.433 [169/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.433 [170/738] Generating lib/rte_timer_def with a custom command 00:02:40.433 [171/738] Generating lib/rte_timer_mingw with a custom command 00:02:40.433 [172/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.691 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:40.691 [174/738] Linking static target lib/librte_metrics.a 00:02:40.691 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.691 [176/738] Linking static target lib/librte_timer.a 00:02:40.949 [177/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:40.949 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:40.949 [179/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.949 [180/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.949 [181/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.215 [182/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:41.215 [183/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:41.215 [184/738] Generating lib/rte_acl_def with a custom command 00:02:41.215 [185/738] Generating lib/rte_acl_mingw with a custom command 00:02:41.215 [186/738] Generating lib/rte_bbdev_def with a custom command 00:02:41.215 [187/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:41.215 [188/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:41.215 [189/738] Generating lib/rte_bitratestats_def with a custom command 00:02:41.215 [190/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:41.507 [191/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:41.507 [192/738] Linking static target lib/librte_bitratestats.a 00:02:41.507 [193/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:41.507 [194/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:41.507 [195/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.777 [196/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:41.777 [197/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:41.777 [198/738] Linking static target lib/librte_bbdev.a 00:02:41.777 [199/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.777 [200/738] Linking static target lib/librte_ethdev.a 00:02:42.034 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:42.034 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:42.034 [203/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:42.292 [204/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.292 [205/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:42.292 [206/738] Compiling C 
object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:42.292 [207/738] Generating lib/rte_bpf_def with a custom command 00:02:42.292 [208/738] Generating lib/rte_bpf_mingw with a custom command 00:02:42.549 [209/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:42.549 [210/738] Generating lib/rte_cfgfile_def with a custom command 00:02:42.549 [211/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:42.549 [212/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:42.549 [213/738] Linking static target lib/librte_cfgfile.a 00:02:42.549 [214/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:42.807 [215/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:42.807 [216/738] Linking static target lib/librte_hash.a 00:02:42.807 [217/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:42.807 [218/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.807 [219/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:42.807 [220/738] Generating lib/rte_compressdev_def with a custom command 00:02:42.807 [221/738] Linking static target lib/librte_bpf.a 00:02:42.807 [222/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:42.807 [223/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:42.807 [224/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:43.066 [225/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:43.066 [226/738] Generating lib/rte_cryptodev_def with a custom command 00:02:43.066 [227/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:43.066 [228/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.066 [229/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.066 [230/738] Linking static target lib/librte_compressdev.a 00:02:43.066 [231/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:43.325 [232/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.325 [233/738] Generating lib/rte_distributor_def with a custom command 00:02:43.325 [234/738] Generating lib/rte_distributor_mingw with a custom command 00:02:43.325 [235/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:43.325 [236/738] Generating lib/rte_efd_def with a custom command 00:02:43.325 [237/738] Generating lib/rte_efd_mingw with a custom command 00:02:43.325 [238/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:43.325 [239/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:43.325 [240/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:43.325 [241/738] Linking static target lib/librte_acl.a 00:02:43.583 [242/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.583 [243/738] Linking target lib/librte_eal.so.23.0 00:02:43.583 [244/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:43.583 [245/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:43.583 [246/738] Linking target lib/librte_ring.so.23.0 00:02:43.583 [247/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:43.583 [248/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:43.583 [249/738] Linking target lib/librte_meter.so.23.0 00:02:43.583 [250/738] Linking target lib/librte_pci.so.23.0 00:02:43.583 [251/738] Linking target lib/librte_timer.so.23.0 00:02:43.841 [252/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.841 [253/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:43.841 [254/738] Linking target lib/librte_acl.so.23.0 00:02:43.841 [255/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:43.841 [256/738] Linking target lib/librte_rcu.so.23.0 00:02:43.841 [257/738] Linking target lib/librte_mempool.so.23.0 00:02:43.841 [258/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:43.841 [259/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:43.841 [260/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:43.841 [261/738] Linking static target lib/librte_distributor.a 00:02:43.841 [262/738] Linking target lib/librte_cfgfile.so.23.0 00:02:43.841 [263/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:43.841 [264/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:43.841 [265/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:43.841 [266/738] Linking target lib/librte_mbuf.so.23.0 00:02:43.841 [267/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.099 [268/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:44.099 [269/738] Linking target lib/librte_net.so.23.0 00:02:44.099 [270/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:44.099 [271/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:44.099 [272/738] Linking target lib/librte_bbdev.so.23.0 00:02:44.099 [273/738] Linking target lib/librte_cmdline.so.23.0 00:02:44.099 [274/738] Linking target lib/librte_hash.so.23.0 00:02:44.099 [275/738] Linking target lib/librte_compressdev.so.23.0 00:02:44.099 [276/738] Linking target lib/librte_distributor.so.23.0 00:02:44.099 [277/738] Generating lib/rte_eventdev_def with a custom command 00:02:44.099 [278/738] Generating lib/rte_eventdev_mingw with a custom command 00:02:44.099 [279/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:44.357 [280/738] Generating lib/rte_gpudev_def with a custom command 00:02:44.357 [281/738] Generating lib/rte_gpudev_mingw with a custom command 00:02:44.357 [282/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:44.357 [283/738] Linking static target lib/librte_efd.a 00:02:44.357 [284/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:44.357 [285/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.357 [286/738] Linking target lib/librte_efd.so.23.0 00:02:44.615 [287/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:44.615 [288/738] Linking static target lib/librte_cryptodev.a 00:02:44.615 [289/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:44.615 [290/738] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:44.615 [291/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:44.615 [292/738] Linking static target lib/librte_gpudev.a 00:02:44.872 [293/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:44.872 [294/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:44.872 [295/738] Generating lib/rte_gro_def with a custom command 00:02:44.872 [296/738] Generating lib/rte_gro_mingw with a custom command 00:02:44.872 [297/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:44.872 [298/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:45.129 [299/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:45.129 [300/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:45.129 [301/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:45.129 [302/738] Linking static target lib/librte_gro.a 00:02:45.129 [303/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:45.129 [304/738] Generating lib/rte_gso_def with a custom command 00:02:45.129 [305/738] Generating lib/rte_gso_mingw with a custom command 00:02:45.129 [306/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.129 [307/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.129 [308/738] Linking target lib/librte_ethdev.so.23.0 00:02:45.129 [309/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:45.385 [310/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:45.385 [311/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.385 [312/738] Linking target lib/librte_gpudev.so.23.0 00:02:45.385 [313/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:45.385 [314/738] Linking target lib/librte_metrics.so.23.0 00:02:45.385 [315/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:45.385 [316/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:45.385 [317/738] Linking target lib/librte_bpf.so.23.0 00:02:45.385 [318/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:45.385 [319/738] Linking target lib/librte_gro.so.23.0 00:02:45.385 [320/738] Linking static target lib/librte_gso.a 00:02:45.385 [321/738] Linking target lib/librte_bitratestats.so.23.0 00:02:45.642 [322/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:45.642 [323/738] Generating lib/rte_ip_frag_def with a custom command 00:02:45.642 [324/738] Generating lib/rte_ip_frag_mingw with a custom command 00:02:45.642 [325/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:45.642 [326/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.642 [327/738] Linking static target lib/librte_eventdev.a 00:02:45.642 [328/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:45.642 [329/738] Linking target lib/librte_gso.so.23.0 00:02:45.642 [330/738] Generating lib/rte_jobstats_def with a custom command 00:02:45.642 [331/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:45.642 [332/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:45.642 [333/738] Generating 
lib/rte_jobstats_mingw with a custom command 00:02:45.642 [334/738] Generating lib/rte_latencystats_def with a custom command 00:02:45.642 [335/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:45.642 [336/738] Generating lib/rte_latencystats_mingw with a custom command 00:02:45.642 [337/738] Generating lib/rte_lpm_def with a custom command 00:02:45.642 [338/738] Generating lib/rte_lpm_mingw with a custom command 00:02:45.642 [339/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:45.642 [340/738] Linking static target lib/librte_jobstats.a 00:02:45.899 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:45.899 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:45.899 [343/738] Linking static target lib/librte_ip_frag.a 00:02:45.899 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.899 [345/738] Linking target lib/librte_jobstats.so.23.0 00:02:45.899 [346/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:45.899 [347/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:45.899 [348/738] Linking static target lib/librte_latencystats.a 00:02:46.156 [349/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.156 [350/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:46.156 [351/738] Linking target lib/librte_ip_frag.so.23.0 00:02:46.156 [352/738] Generating lib/rte_member_def with a custom command 00:02:46.156 [353/738] Generating lib/rte_member_mingw with a custom command 00:02:46.156 [354/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.156 [355/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.156 [356/738] Linking target lib/librte_latencystats.so.23.0 00:02:46.156 [357/738] Linking target lib/librte_cryptodev.so.23.0 00:02:46.156 [358/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:46.156 [359/738] Generating lib/rte_pcapng_def with a custom command 00:02:46.156 [360/738] Generating lib/rte_pcapng_mingw with a custom command 00:02:46.415 [361/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:46.415 [362/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:46.415 [363/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:46.415 [364/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.415 [365/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:46.415 [366/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:46.415 [367/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:46.674 [368/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:46.674 [369/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:46.674 [370/738] Generating lib/rte_power_def with a custom command 00:02:46.674 [371/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:46.674 [372/738] Linking static target lib/librte_lpm.a 00:02:46.674 [373/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:46.674 [374/738] Generating lib/rte_power_mingw with a custom command 
00:02:46.674 [375/738] Generating lib/rte_rawdev_def with a custom command 00:02:46.674 [376/738] Generating lib/rte_rawdev_mingw with a custom command 00:02:46.674 [377/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.674 [378/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:46.674 [379/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:46.674 [380/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:46.932 [381/738] Linking target lib/librte_lpm.so.23.0 00:02:46.932 [382/738] Generating lib/rte_regexdev_def with a custom command 00:02:46.932 [383/738] Generating lib/rte_regexdev_mingw with a custom command 00:02:46.932 [384/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:46.932 [385/738] Linking static target lib/librte_pcapng.a 00:02:46.932 [386/738] Generating lib/rte_dmadev_def with a custom command 00:02:46.932 [387/738] Generating lib/rte_dmadev_mingw with a custom command 00:02:46.932 [388/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:46.932 [389/738] Linking static target lib/librte_rawdev.a 00:02:46.932 [390/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:46.932 [391/738] Generating lib/rte_rib_def with a custom command 00:02:46.932 [392/738] Generating lib/rte_rib_mingw with a custom command 00:02:46.932 [393/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.932 [394/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:46.932 [395/738] Linking static target lib/librte_regexdev.a 00:02:46.932 [396/738] Linking target lib/librte_eventdev.so.23.0 00:02:46.932 [397/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:46.932 [398/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.932 [399/738] Linking static target lib/librte_power.a 00:02:47.190 [400/738] Linking target lib/librte_pcapng.so.23.0 00:02:47.190 [401/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:47.190 [402/738] Generating lib/rte_reorder_def with a custom command 00:02:47.190 [403/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:47.190 [404/738] Linking static target lib/librte_dmadev.a 00:02:47.190 [405/738] Generating lib/rte_reorder_mingw with a custom command 00:02:47.190 [406/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:47.190 [407/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.190 [408/738] Linking target lib/librte_rawdev.so.23.0 00:02:47.190 [409/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:47.190 [410/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:47.190 [411/738] Generating lib/rte_sched_def with a custom command 00:02:47.190 [412/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:47.190 [413/738] Generating lib/rte_sched_mingw with a custom command 00:02:47.190 [414/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:47.190 [415/738] Generating lib/rte_security_def with a custom command 00:02:47.448 [416/738] Generating lib/rte_security_mingw with a custom command 00:02:47.448 [417/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:47.448 
[418/738] Linking static target lib/librte_reorder.a 00:02:47.448 [419/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:47.448 [420/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:47.448 [421/738] Generating lib/rte_stack_def with a custom command 00:02:47.448 [422/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:47.448 [423/738] Generating lib/rte_stack_mingw with a custom command 00:02:47.448 [424/738] Linking static target lib/librte_stack.a 00:02:47.448 [425/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.448 [426/738] Linking target lib/librte_dmadev.so.23.0 00:02:47.448 [427/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.448 [428/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.448 [429/738] Linking target lib/librte_reorder.so.23.0 00:02:47.448 [430/738] Linking target lib/librte_regexdev.so.23.0 00:02:47.448 [431/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:47.448 [432/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.706 [433/738] Linking target lib/librte_stack.so.23.0 00:02:47.706 [434/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:47.706 [435/738] Linking static target lib/librte_member.a 00:02:47.706 [436/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:47.706 [437/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:47.706 [438/738] Linking static target lib/librte_rib.a 00:02:47.706 [439/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.706 [440/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:47.706 [441/738] Linking static target lib/librte_security.a 00:02:47.706 [442/738] Linking target lib/librte_power.so.23.0 00:02:47.706 [443/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.963 [444/738] Linking target lib/librte_member.so.23.0 00:02:47.963 [445/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.963 [446/738] Generating lib/rte_vhost_def with a custom command 00:02:47.963 [447/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.963 [448/738] Generating lib/rte_vhost_mingw with a custom command 00:02:47.963 [449/738] Linking target lib/librte_rib.so.23.0 00:02:47.963 [450/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.963 [451/738] Linking target lib/librte_security.so.23.0 00:02:47.963 [452/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.963 [453/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.963 [454/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:47.964 [455/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:48.221 [456/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:48.478 [457/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:48.478 [458/738] Linking static target lib/librte_sched.a 00:02:48.478 [459/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:48.478 [460/738] Generating lib/rte_ipsec_def with a custom command 00:02:48.478 
[461/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:48.478 [462/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:48.478 [463/738] Generating lib/rte_ipsec_mingw with a custom command 00:02:48.735 [464/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:48.735 [465/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:48.735 [466/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.735 [467/738] Linking target lib/librte_sched.so.23.0 00:02:48.735 [468/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.735 [469/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:48.992 [470/738] Generating lib/rte_fib_def with a custom command 00:02:48.992 [471/738] Generating lib/rte_fib_mingw with a custom command 00:02:48.992 [472/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:48.992 [473/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:48.992 [474/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:49.248 [475/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:49.248 [476/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:49.248 [477/738] Linking static target lib/librte_ipsec.a 00:02:49.248 [478/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:49.248 [479/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:49.248 [480/738] Linking static target lib/librte_fib.a 00:02:49.505 [481/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:49.505 [482/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.505 [483/738] Linking target lib/librte_ipsec.so.23.0 00:02:49.505 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:49.505 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:49.505 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:49.505 [487/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.505 [488/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:49.505 [489/738] Linking target lib/librte_fib.so.23.0 00:02:50.070 [490/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:50.070 [491/738] Generating lib/rte_port_def with a custom command 00:02:50.070 [492/738] Generating lib/rte_port_mingw with a custom command 00:02:50.070 [493/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:50.070 [494/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:50.070 [495/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:50.070 [496/738] Generating lib/rte_pdump_def with a custom command 00:02:50.070 [497/738] Generating lib/rte_pdump_mingw with a custom command 00:02:50.070 [498/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:50.070 [499/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:50.070 [500/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:50.326 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:50.326 [502/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:50.326 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
00:02:50.326 [504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:50.326 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:50.583 [506/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:50.583 [507/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:50.583 [508/738] Linking static target lib/librte_port.a 00:02:50.583 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:50.583 [510/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:50.583 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:50.583 [512/738] Linking static target lib/librte_pdump.a 00:02:50.840 [513/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.840 [514/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.840 [515/738] Linking target lib/librte_port.so.23.0 00:02:50.840 [516/738] Linking target lib/librte_pdump.so.23.0 00:02:50.840 [517/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:50.840 [518/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:51.098 [519/738] Generating lib/rte_table_def with a custom command 00:02:51.098 [520/738] Generating lib/rte_table_mingw with a custom command 00:02:51.098 [521/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:51.098 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:51.098 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:51.098 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:51.098 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:51.098 [526/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:51.356 [527/738] Generating lib/rte_pipeline_def with a custom command 00:02:51.356 [528/738] Generating lib/rte_pipeline_mingw with a custom command 00:02:51.356 [529/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:51.356 [530/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:51.356 [531/738] Linking static target lib/librte_table.a 00:02:51.356 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:51.613 [533/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:51.613 [534/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:51.613 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:51.613 [536/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:51.613 [537/738] Generating lib/rte_graph_def with a custom command 00:02:51.613 [538/738] Generating lib/rte_graph_mingw with a custom command 00:02:51.613 [539/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.613 [540/738] Linking target lib/librte_table.so.23.0 00:02:51.870 [541/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:51.870 [542/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:51.870 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:51.870 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:51.870 [545/738] Compiling C object 
lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:51.870 [546/738] Linking static target lib/librte_graph.a 00:02:52.127 [547/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:52.127 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:52.127 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:52.385 [550/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:52.385 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:52.385 [552/738] Generating lib/rte_node_def with a custom command 00:02:52.385 [553/738] Generating lib/rte_node_mingw with a custom command 00:02:52.385 [554/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.385 [555/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:52.385 [556/738] Linking target lib/librte_graph.so.23.0 00:02:52.385 [557/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:52.385 [558/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:52.642 [559/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:52.642 [560/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:52.642 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:52.642 [562/738] Generating drivers/rte_bus_pci_def with a custom command 00:02:52.642 [563/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:52.642 [564/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.642 [565/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:52.642 [566/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:52.642 [567/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:52.642 [568/738] Generating drivers/rte_bus_vdev_def with a custom command 00:02:52.642 [569/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:52.642 [570/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:52.642 [571/738] Linking static target lib/librte_node.a 00:02:52.642 [572/738] Generating drivers/rte_mempool_ring_def with a custom command 00:02:52.642 [573/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:52.900 [574/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:52.900 [575/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.900 [576/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:52.900 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.900 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.900 [579/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.900 [580/738] Linking target lib/librte_node.so.23.0 00:02:52.900 [581/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:52.900 [582/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.900 [583/738] Linking static target drivers/librte_bus_vdev.a 00:02:52.900 [584/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:52.900 [585/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.900 [586/738] Linking static target drivers/librte_bus_pci.a 00:02:53.157 
[587/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:53.157 [588/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.157 [589/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.157 [590/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.157 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:53.157 [592/738] Linking target drivers/librte_bus_vdev.so.23.0 00:02:53.157 [593/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:53.414 [594/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.414 [595/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:53.414 [596/738] Linking target drivers/librte_bus_pci.so.23.0 00:02:53.414 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:53.414 [598/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.414 [599/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.414 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:53.671 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.671 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.671 [603/738] Linking static target drivers/librte_mempool_ring.a 00:02:53.671 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.671 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:02:53.929 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:53.929 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:53.929 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:53.929 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:54.187 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:54.444 [611/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:54.702 [612/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:54.702 [613/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:54.702 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:54.702 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:54.702 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:54.959 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:02:54.959 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:54.959 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:55.216 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:55.474 [621/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:55.733 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:55.733 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:55.733 [624/738] 
Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:55.733 [625/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:55.991 [626/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:55.991 [627/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:55.991 [628/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:55.991 [629/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:55.991 [630/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:55.991 [631/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:56.249 [632/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:56.249 [633/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:56.249 [634/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:56.507 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:56.507 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:56.507 [637/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:56.507 [638/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:56.507 [639/738] Linking static target drivers/librte_net_i40e.a 00:02:56.507 [640/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:56.507 [641/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:56.765 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:56.765 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:56.765 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:57.023 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:57.023 [646/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.023 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:57.023 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:57.023 [649/738] Linking target drivers/librte_net_i40e.so.23.0 00:02:57.023 [650/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:57.280 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:57.280 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:57.280 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:57.280 [654/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:57.280 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:57.537 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:57.537 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:57.537 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:57.537 [659/738] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:57.537 [660/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:57.537 [661/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.537 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:57.537 [663/738] Linking static target lib/librte_vhost.a 00:02:57.795 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:58.052 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:58.052 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:58.311 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:58.311 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:58.311 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:58.311 [670/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.311 [671/738] Linking target lib/librte_vhost.so.23.0 00:02:58.568 [672/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:58.568 [673/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:58.568 [674/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:58.568 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:58.568 [676/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:58.825 [677/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:58.825 [678/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:58.825 [679/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:58.825 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:58.825 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:58.825 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:58.825 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:59.083 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:59.083 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:59.083 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:59.083 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:59.083 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:59.340 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:59.340 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:59.340 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:59.597 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:59.597 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:59.597 [694/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:59.856 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:59.856 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:00.113 [697/738] Compiling C 
object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:00.113 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:00.113 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:00.371 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:00.371 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:00.371 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:00.371 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:00.628 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:00.628 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:00.628 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:00.885 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:01.143 [708/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:01.143 [709/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:01.143 [710/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:01.143 [711/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:01.143 [712/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:01.143 [713/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:01.143 [714/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:01.143 [715/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:01.399 [716/738] Linking static target lib/librte_pipeline.a 00:03:01.399 [717/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:01.657 [718/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:01.657 [719/738] Linking target app/dpdk-test-cmdline 00:03:01.657 [720/738] Linking target app/dpdk-dumpcap 00:03:01.657 [721/738] Linking target app/dpdk-pdump 00:03:01.657 [722/738] Linking target app/dpdk-test-bbdev 00:03:01.657 [723/738] Linking target app/dpdk-proc-info 00:03:01.657 [724/738] Linking target app/dpdk-test-acl 00:03:01.657 [725/738] Linking target app/dpdk-test-compress-perf 00:03:01.914 [726/738] Linking target app/dpdk-test-flow-perf 00:03:01.914 [727/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:01.914 [728/738] Linking target app/dpdk-test-fib 00:03:01.914 [729/738] Linking target app/dpdk-test-gpudev 00:03:01.914 [730/738] Linking target app/dpdk-test-crypto-perf 00:03:01.914 [731/738] Linking target app/dpdk-test-pipeline 00:03:01.914 [732/738] Linking target app/dpdk-test-eventdev 00:03:01.914 [733/738] Linking target app/dpdk-test-regex 00:03:01.914 [734/738] Linking target app/dpdk-testpmd 00:03:01.914 [735/738] Linking target app/dpdk-test-sad 00:03:01.914 [736/738] Linking target app/dpdk-test-security-perf 00:03:04.446 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.446 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:04.446 21:08:54 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:04.446 21:08:54 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:04.446 21:08:54 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:04.705 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:04.705 
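The traced commands above show the autobuild script's pattern at this step: detect the OS with uname -s, fall through when the kernel is not FreeBSD, then run the install target through ninja in the build-tmp tree. A minimal sketch of that gate, assuming the same build directory layout (the path and -j10 job count are taken from the trace above; the FreeBSD branch body is hypothetical, since this log only exercises the Linux path):

    #!/usr/bin/env bash
    set -euo pipefail

    # Build tree as it appears in the trace above
    BUILD_DIR=/home/vagrant/spdk_repo/dpdk/build-tmp

    # Gate on the kernel name, as the trace does with [[ Linux == FreeBSD ]]
    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        # Hypothetical FreeBSD handling; not shown in this log
        gmake -C "$BUILD_DIR" install
    else
        ninja -C "$BUILD_DIR" -j10 install
    fi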
[0/1] Installing files. 00:03:04.968 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.968 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.968 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.969 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:04.970 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:04.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:04.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:04.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:04.973 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:04.973 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:04.973 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:04.973 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.235 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:05.236 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:05.236 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:05.236 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:05.236 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:05.236 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.236 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.237 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.238 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to
/home/vagrant/spdk_repo/dpdk/build/include 00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:05.239 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:05.239 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:05.239 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:05.239 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:05.239 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:05.239 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:05.239 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:05.239 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:05.239 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:05.239 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:05.239 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:05.239 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:05.239 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:05.239 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:05.239 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:05.239 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:05.239 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:05.239 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:05.239 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:05.239 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:05.239 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
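Note on the libdpdk.pc and libdpdk-libs.pc files installed just above: they are what lets an external build (the SPDK configure step later in this log) locate this DPDK prefix through pkg-config. A minimal consumption sketch, assuming the paths shown above; main.c is a hypothetical stand-in consumer, not part of this build:
    # point pkg-config at the just-installed prefix
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk    # this branch should report a 22.11.x version
    # compile a consumer against it
    cc main.c $(pkg-config --cflags --libs libdpdk) -o app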
00:03:05.239 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:05.239 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:05.239 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:05.239 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:05.239 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:05.240 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:05.240 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:05.240 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:05.240 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:05.240 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:05.240 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:05.240 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:05.240 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:05.240 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:05.240 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:05.240 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:05.240 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:05.240 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:05.240 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:05.240 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:05.240 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:05.240 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:05.240 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:05.240 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:05.240 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:05.240 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:05.240 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:05.240 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:05.240 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:05.240 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:05.240 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:05.240 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:05.240 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:05.240 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:05.240 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:05.240 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:05.240 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:05.240 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:05.240 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:05.240 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:05.240 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:05.240 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:05.240 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:05.240 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:05.240 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:05.240 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:05.240 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:05.240 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:05.240 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:05.240 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:05.240 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:05.240 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:05.240 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:05.240 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:05.240 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:05.240 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:05.240 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:05.240 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:05.240 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:05.240 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:05.240 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:05.240 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:05.240 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:05.240 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:05.240 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:05.240 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:05.240 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:05.240 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:05.240 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:05.240 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:05.240 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:05.240 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:05.240 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:05.240 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:05.240 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:05.240 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:05.240 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:05.240 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:05.240 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:05.240 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:05.240 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:05.240 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:05.240 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:05.240 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:05.240 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:05.240 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:05.240 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:05.240 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:05.240 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:05.240 
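The three names per library in the symlink output above follow the standard ELF shared-library convention: the fully versioned file carries the code, the SONAME link is what the dynamic loader resolves at run time, and the unversioned link is what the linker finds at build time. An illustrative recreation using librte_eal as the example (a sketch only; the real work here is done by Meson and symlink-drivers-solibs.sh):
    ln -sf librte_eal.so.23.0 librte_eal.so.23    # runtime (SONAME) name
    ln -sf librte_eal.so.23 librte_eal.so         # link-time name used by -lrte_eal
    readelf -d librte_eal.so.23.0 | grep SONAME   # expect: librte_eal.so.23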
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:05.240 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:05.240 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:05.240 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:05.240 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:05.240 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:05.240 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:05.241 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:05.241 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:05.241 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:05.241 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:05.241 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:05.241 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:05.241 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:05.241 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:05.241 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:05.241 21:08:54 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:05.241 21:08:54 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:05.241 00:03:05.241 real 0m34.413s 00:03:05.241 user 3m36.489s 00:03:05.241 sys 0m35.236s 00:03:05.241 ************************************ 00:03:05.241 END TEST build_native_dpdk 00:03:05.241 21:08:54 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:05.241 21:08:54 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:05.241 ************************************ 00:03:05.500 21:08:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:05.500 21:08:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:05.500 21:08:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:05.500 21:08:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:05.500 21:08:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:05.500 21:08:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:05.500 21:08:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:05.500 21:08:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:05.500 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:05.500 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.500 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:05.500 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:05.759 Using 'verbs' RDMA provider 00:03:16.698 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:26.670 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:26.671 Creating mk/config.mk...done. 00:03:26.671 Creating mk/cc.flags.mk...done. 00:03:26.671 Type 'make' to build. 00:03:26.671 21:09:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:26.671 21:09:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:26.671 21:09:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:26.671 21:09:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.931 ************************************ 00:03:26.931 START TEST make 00:03:26.931 ************************************ 00:03:26.931 21:09:16 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:27.192 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:27.192 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:27.192 meson setup builddir \ 00:03:27.192 -Dwith-libaio=enabled \ 00:03:27.192 -Dwith-liburing=enabled \ 00:03:27.192 -Dwith-libvfn=disabled \ 00:03:27.192 -Dwith-spdk=disabled \ 00:03:27.192 -Dexamples=false \ 00:03:27.192 -Dtests=false \ 00:03:27.192 -Dtools=false && \ 00:03:27.192 meson compile -C builddir && \ 00:03:27.192 cd -) 00:03:29.107 The Meson build system 00:03:29.107 Version: 1.5.0 00:03:29.107 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:29.107 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:29.107 Build type: native build 00:03:29.107 Project name: xnvme 00:03:29.107 Project version: 0.7.5 00:03:29.107 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:29.107 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:29.107 Host machine cpu family: x86_64 00:03:29.107 Host machine cpu: x86_64 00:03:29.107 Message: host_machine.system: linux 00:03:29.107 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:29.107 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:29.107 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:29.107 Run-time dependency threads found: YES 00:03:29.107 Has header "setupapi.h" : NO 00:03:29.107 Has header "linux/blkzoned.h" : YES 00:03:29.107 Has header "linux/blkzoned.h" : YES (cached) 00:03:29.107 Has header "libaio.h" : YES 00:03:29.107 Library aio found: YES 00:03:29.107 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:29.107 Run-time dependency liburing found: YES 2.2 00:03:29.107 Dependency libvfn skipped: feature with-libvfn disabled 00:03:29.107 Found CMake: /usr/bin/cmake (3.27.7) 00:03:29.107 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:29.107 Subproject spdk : skipped: feature with-spdk disabled 00:03:29.107 Run-time dependency appleframeworks found: NO (tried framework) 00:03:29.107 Run-time dependency appleframeworks found: NO (tried framework) 00:03:29.107 Library rt found: YES 00:03:29.107 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:29.107 Configuring xnvme_config.h using
configuration 00:03:29.107 Configuring xnvme.spec using configuration 00:03:29.107 Run-time dependency bash-completion found: YES 2.11 00:03:29.107 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:29.107 Program cp found: YES (/usr/bin/cp) 00:03:29.107 Build targets in project: 3 00:03:29.107 00:03:29.107 xnvme 0.7.5 00:03:29.107 00:03:29.107 Subprojects 00:03:29.107 spdk : NO Feature 'with-spdk' disabled 00:03:29.107 00:03:29.107 User defined options 00:03:29.107 examples : false 00:03:29.107 tests : false 00:03:29.107 tools : false 00:03:29.107 with-libaio : enabled 00:03:29.107 with-liburing: enabled 00:03:29.107 with-libvfn : disabled 00:03:29.107 with-spdk : disabled 00:03:29.107 00:03:29.107 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:29.675 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:29.675 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:29.675 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:29.675 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:29.675 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:29.675 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:29.675 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:29.675 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:29.675 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:29.675 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:29.675 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:29.675 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:29.675 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:29.675 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:29.675 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:29.675 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:29.675 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:29.675 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:29.675 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:29.675 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:29.675 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:29.934 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:29.934 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:29.934 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:29.934 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:29.934 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:29.934 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:29.934 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:29.934 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:29.934 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:29.934 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:29.934 [31/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:29.934 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:29.934 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:29.934 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:29.934 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:29.934 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:29.934 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:29.934 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:29.934 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:29.934 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:29.934 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:29.934 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:29.934 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:29.934 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:29.934 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:29.934 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:29.934 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:29.934 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:29.934 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:29.934 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:29.934 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:29.934 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:29.934 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:29.934 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:29.934 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:29.934 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:29.934 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:29.934 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:29.934 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:30.192 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:30.192 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:30.192 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:30.192 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:30.192 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:30.192 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:30.192 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:30.192 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:30.192 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:30.192 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:30.192 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:30.192 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:30.192 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:30.192 [73/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:30.450 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:30.451 [75/76] Linking static target lib/libxnvme.a 00:03:30.451 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:30.451 INFO: autodetecting backend as ninja 00:03:30.451 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:30.451 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:02.521 CC lib/ut_mock/mock.o 00:04:02.521 CC lib/ut/ut.o 00:04:02.521 CC lib/log/log_flags.o 00:04:02.521 CC lib/log/log.o 00:04:02.521 CC lib/log/log_deprecated.o 00:04:02.521 LIB libspdk_ut_mock.a 00:04:02.521 LIB libspdk_ut.a 00:04:02.521 LIB libspdk_log.a 00:04:02.521 SO libspdk_ut_mock.so.6.0 00:04:02.521 SO libspdk_ut.so.2.0 00:04:02.521 SO libspdk_log.so.7.1 00:04:02.521 SYMLINK libspdk_ut_mock.so 00:04:02.521 SYMLINK libspdk_ut.so 00:04:02.521 SYMLINK libspdk_log.so 00:04:02.521 CC lib/dma/dma.o 00:04:02.521 CC lib/util/base64.o 00:04:02.521 CC lib/util/bit_array.o 00:04:02.521 CC lib/util/cpuset.o 00:04:02.521 CC lib/ioat/ioat.o 00:04:02.521 CC lib/util/crc16.o 00:04:02.521 CC lib/util/crc32.o 00:04:02.521 CC lib/util/crc32c.o 00:04:02.521 CXX lib/trace_parser/trace.o 00:04:02.521 CC lib/vfio_user/host/vfio_user_pci.o 00:04:02.521 CC lib/util/crc32_ieee.o 00:04:02.521 CC lib/util/crc64.o 00:04:02.521 CC lib/util/dif.o 00:04:02.521 CC lib/util/fd.o 00:04:02.521 LIB libspdk_dma.a 00:04:02.521 CC lib/util/fd_group.o 00:04:02.521 CC lib/util/file.o 00:04:02.521 SO libspdk_dma.so.5.0 00:04:02.521 CC lib/vfio_user/host/vfio_user.o 00:04:02.521 CC lib/util/hexlify.o 00:04:02.521 SYMLINK libspdk_dma.so 00:04:02.521 CC lib/util/iov.o 00:04:02.521 LIB libspdk_ioat.a 00:04:02.521 CC lib/util/math.o 00:04:02.521 SO libspdk_ioat.so.7.0 00:04:02.521 CC lib/util/net.o 00:04:02.521 CC lib/util/pipe.o 00:04:02.521 SYMLINK libspdk_ioat.so 00:04:02.521 CC lib/util/strerror_tls.o 00:04:02.521 CC lib/util/string.o 00:04:02.521 CC lib/util/uuid.o 00:04:02.521 LIB libspdk_vfio_user.a 00:04:02.521 CC lib/util/xor.o 00:04:02.521 SO libspdk_vfio_user.so.5.0 00:04:02.521 CC lib/util/zipf.o 00:04:02.521 CC lib/util/md5.o 00:04:02.521 SYMLINK libspdk_vfio_user.so 00:04:02.521 LIB libspdk_util.a 00:04:02.521 SO libspdk_util.so.10.1 00:04:02.521 LIB libspdk_trace_parser.a 00:04:02.521 SO libspdk_trace_parser.so.6.0 00:04:02.521 SYMLINK libspdk_util.so 00:04:02.521 SYMLINK libspdk_trace_parser.so 00:04:02.521 CC lib/conf/conf.o 00:04:02.521 CC lib/vmd/vmd.o 00:04:02.521 CC lib/vmd/led.o 00:04:02.521 CC lib/idxd/idxd.o 00:04:02.521 CC lib/idxd/idxd_user.o 00:04:02.521 CC lib/idxd/idxd_kernel.o 00:04:02.521 CC lib/rdma_utils/rdma_utils.o 00:04:02.521 CC lib/json/json_parse.o 00:04:02.521 CC lib/json/json_util.o 00:04:02.521 CC lib/env_dpdk/env.o 00:04:02.521 CC lib/env_dpdk/memory.o 00:04:02.521 CC lib/env_dpdk/pci.o 00:04:02.521 LIB libspdk_conf.a 00:04:02.521 SO libspdk_conf.so.6.0 00:04:02.521 CC lib/json/json_write.o 00:04:02.521 CC lib/env_dpdk/init.o 00:04:02.521 CC lib/env_dpdk/threads.o 00:04:02.521 LIB libspdk_rdma_utils.a 00:04:02.521 SYMLINK libspdk_conf.so 00:04:02.521 SO libspdk_rdma_utils.so.1.0 00:04:02.521 CC lib/env_dpdk/pci_ioat.o 00:04:02.521 SYMLINK libspdk_rdma_utils.so 00:04:02.521 CC lib/env_dpdk/pci_virtio.o 00:04:02.521 CC lib/env_dpdk/pci_vmd.o 00:04:02.521 CC lib/env_dpdk/pci_idxd.o 00:04:02.521 CC lib/rdma_provider/common.o 00:04:02.521 CC lib/env_dpdk/pci_event.o 00:04:02.521 CC lib/env_dpdk/sigbus_handler.o 00:04:02.521 LIB 
libspdk_json.a 00:04:02.521 CC lib/env_dpdk/pci_dpdk.o 00:04:02.521 SO libspdk_json.so.6.0 00:04:02.521 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:02.521 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:02.521 SYMLINK libspdk_json.so 00:04:02.521 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:02.521 LIB libspdk_idxd.a 00:04:02.521 SO libspdk_idxd.so.12.1 00:04:02.521 LIB libspdk_vmd.a 00:04:02.521 SO libspdk_vmd.so.6.0 00:04:02.521 SYMLINK libspdk_idxd.so 00:04:02.521 SYMLINK libspdk_vmd.so 00:04:02.521 LIB libspdk_rdma_provider.a 00:04:02.521 CC lib/jsonrpc/jsonrpc_server.o 00:04:02.521 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:02.521 CC lib/jsonrpc/jsonrpc_client.o 00:04:02.521 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:02.521 SO libspdk_rdma_provider.so.7.0 00:04:02.521 SYMLINK libspdk_rdma_provider.so 00:04:02.521 LIB libspdk_jsonrpc.a 00:04:02.521 SO libspdk_jsonrpc.so.6.0 00:04:02.521 SYMLINK libspdk_jsonrpc.so 00:04:02.521 CC lib/rpc/rpc.o 00:04:02.521 LIB libspdk_env_dpdk.a 00:04:02.521 SO libspdk_env_dpdk.so.15.1 00:04:02.521 SYMLINK libspdk_env_dpdk.so 00:04:02.521 LIB libspdk_rpc.a 00:04:02.521 SO libspdk_rpc.so.6.0 00:04:02.521 SYMLINK libspdk_rpc.so 00:04:02.521 CC lib/keyring/keyring.o 00:04:02.521 CC lib/notify/notify_rpc.o 00:04:02.521 CC lib/keyring/keyring_rpc.o 00:04:02.521 CC lib/notify/notify.o 00:04:02.521 CC lib/trace/trace_flags.o 00:04:02.521 CC lib/trace/trace.o 00:04:02.521 CC lib/trace/trace_rpc.o 00:04:02.781 LIB libspdk_notify.a 00:04:02.781 SO libspdk_notify.so.6.0 00:04:02.781 SYMLINK libspdk_notify.so 00:04:02.781 LIB libspdk_keyring.a 00:04:02.781 LIB libspdk_trace.a 00:04:02.781 SO libspdk_keyring.so.2.0 00:04:02.781 SO libspdk_trace.so.11.0 00:04:02.781 SYMLINK libspdk_keyring.so 00:04:02.781 SYMLINK libspdk_trace.so 00:04:03.042 CC lib/sock/sock.o 00:04:03.042 CC lib/sock/sock_rpc.o 00:04:03.042 CC lib/thread/thread.o 00:04:03.042 CC lib/thread/iobuf.o 00:04:03.614 LIB libspdk_sock.a 00:04:03.614 SO libspdk_sock.so.10.0 00:04:03.614 SYMLINK libspdk_sock.so 00:04:03.875 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:03.875 CC lib/nvme/nvme_fabric.o 00:04:03.875 CC lib/nvme/nvme_ctrlr.o 00:04:03.875 CC lib/nvme/nvme_ns_cmd.o 00:04:03.875 CC lib/nvme/nvme_ns.o 00:04:03.875 CC lib/nvme/nvme_pcie_common.o 00:04:03.875 CC lib/nvme/nvme_qpair.o 00:04:03.875 CC lib/nvme/nvme_pcie.o 00:04:03.875 CC lib/nvme/nvme.o 00:04:04.448 CC lib/nvme/nvme_quirks.o 00:04:04.448 CC lib/nvme/nvme_transport.o 00:04:04.448 CC lib/nvme/nvme_discovery.o 00:04:04.448 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:04.710 LIB libspdk_thread.a 00:04:04.710 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:04.710 CC lib/nvme/nvme_tcp.o 00:04:04.710 SO libspdk_thread.so.11.0 00:04:04.710 CC lib/nvme/nvme_opal.o 00:04:04.710 SYMLINK libspdk_thread.so 00:04:04.710 CC lib/nvme/nvme_io_msg.o 00:04:04.710 CC lib/nvme/nvme_poll_group.o 00:04:04.971 CC lib/nvme/nvme_zns.o 00:04:04.971 CC lib/nvme/nvme_stubs.o 00:04:05.232 CC lib/nvme/nvme_auth.o 00:04:05.232 CC lib/accel/accel.o 00:04:05.232 CC lib/accel/accel_rpc.o 00:04:05.232 CC lib/nvme/nvme_cuse.o 00:04:05.232 CC lib/accel/accel_sw.o 00:04:05.494 CC lib/nvme/nvme_rdma.o 00:04:05.494 CC lib/blob/blobstore.o 00:04:05.494 CC lib/init/json_config.o 00:04:05.494 CC lib/blob/request.o 00:04:05.754 CC lib/virtio/virtio.o 00:04:05.754 CC lib/init/subsystem.o 00:04:06.015 CC lib/virtio/virtio_vhost_user.o 00:04:06.015 CC lib/virtio/virtio_vfio_user.o 00:04:06.015 CC lib/virtio/virtio_pci.o 00:04:06.015 CC lib/init/subsystem_rpc.o 00:04:06.015 CC lib/blob/zeroes.o 00:04:06.015 CC 
lib/init/rpc.o 00:04:06.015 CC lib/blob/blob_bs_dev.o 00:04:06.276 CC lib/fsdev/fsdev.o 00:04:06.276 CC lib/fsdev/fsdev_io.o 00:04:06.276 CC lib/fsdev/fsdev_rpc.o 00:04:06.276 LIB libspdk_virtio.a 00:04:06.276 LIB libspdk_accel.a 00:04:06.276 LIB libspdk_init.a 00:04:06.276 SO libspdk_virtio.so.7.0 00:04:06.276 SO libspdk_accel.so.16.0 00:04:06.276 SO libspdk_init.so.6.0 00:04:06.276 SYMLINK libspdk_virtio.so 00:04:06.276 SYMLINK libspdk_init.so 00:04:06.276 SYMLINK libspdk_accel.so 00:04:06.537 CC lib/bdev/bdev.o 00:04:06.537 CC lib/bdev/bdev_rpc.o 00:04:06.537 CC lib/bdev/bdev_zone.o 00:04:06.537 CC lib/bdev/scsi_nvme.o 00:04:06.537 CC lib/bdev/part.o 00:04:06.537 CC lib/event/app.o 00:04:06.537 CC lib/event/reactor.o 00:04:06.798 CC lib/event/log_rpc.o 00:04:06.798 CC lib/event/app_rpc.o 00:04:06.798 LIB libspdk_fsdev.a 00:04:06.799 SO libspdk_fsdev.so.2.0 00:04:06.799 LIB libspdk_nvme.a 00:04:06.799 CC lib/event/scheduler_static.o 00:04:06.799 SYMLINK libspdk_fsdev.so 00:04:07.060 SO libspdk_nvme.so.15.0 00:04:07.060 LIB libspdk_event.a 00:04:07.060 SO libspdk_event.so.14.0 00:04:07.060 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:07.060 SYMLINK libspdk_event.so 00:04:07.321 SYMLINK libspdk_nvme.so 00:04:07.892 LIB libspdk_fuse_dispatcher.a 00:04:07.892 SO libspdk_fuse_dispatcher.so.1.0 00:04:07.892 SYMLINK libspdk_fuse_dispatcher.so 00:04:08.834 LIB libspdk_blob.a 00:04:08.834 SO libspdk_blob.so.12.0 00:04:08.834 SYMLINK libspdk_blob.so 00:04:09.095 CC lib/lvol/lvol.o 00:04:09.095 CC lib/blobfs/blobfs.o 00:04:09.095 CC lib/blobfs/tree.o 00:04:09.355 LIB libspdk_bdev.a 00:04:09.355 SO libspdk_bdev.so.17.0 00:04:09.614 SYMLINK libspdk_bdev.so 00:04:09.614 CC lib/scsi/dev.o 00:04:09.614 CC lib/scsi/lun.o 00:04:09.614 CC lib/nbd/nbd.o 00:04:09.614 CC lib/nbd/nbd_rpc.o 00:04:09.614 CC lib/scsi/port.o 00:04:09.614 CC lib/nvmf/ctrlr.o 00:04:09.614 CC lib/ublk/ublk.o 00:04:09.614 CC lib/ftl/ftl_core.o 00:04:09.874 CC lib/scsi/scsi.o 00:04:09.874 CC lib/scsi/scsi_bdev.o 00:04:09.874 CC lib/scsi/scsi_pr.o 00:04:09.874 CC lib/scsi/scsi_rpc.o 00:04:09.874 CC lib/scsi/task.o 00:04:09.874 LIB libspdk_blobfs.a 00:04:10.134 SO libspdk_blobfs.so.11.0 00:04:10.134 CC lib/ftl/ftl_init.o 00:04:10.134 CC lib/ublk/ublk_rpc.o 00:04:10.134 SYMLINK libspdk_blobfs.so 00:04:10.134 CC lib/ftl/ftl_layout.o 00:04:10.134 LIB libspdk_nbd.a 00:04:10.134 LIB libspdk_lvol.a 00:04:10.134 SO libspdk_lvol.so.11.0 00:04:10.134 SO libspdk_nbd.so.7.0 00:04:10.134 CC lib/ftl/ftl_debug.o 00:04:10.134 SYMLINK libspdk_lvol.so 00:04:10.134 SYMLINK libspdk_nbd.so 00:04:10.134 CC lib/ftl/ftl_io.o 00:04:10.134 CC lib/ftl/ftl_sb.o 00:04:10.134 CC lib/ftl/ftl_l2p.o 00:04:10.134 CC lib/nvmf/ctrlr_discovery.o 00:04:10.134 CC lib/ftl/ftl_l2p_flat.o 00:04:10.394 LIB libspdk_ublk.a 00:04:10.395 LIB libspdk_scsi.a 00:04:10.395 CC lib/ftl/ftl_nv_cache.o 00:04:10.395 SO libspdk_ublk.so.3.0 00:04:10.395 SO libspdk_scsi.so.9.0 00:04:10.395 CC lib/nvmf/ctrlr_bdev.o 00:04:10.395 CC lib/nvmf/subsystem.o 00:04:10.395 CC lib/ftl/ftl_band.o 00:04:10.395 SYMLINK libspdk_ublk.so 00:04:10.395 CC lib/ftl/ftl_band_ops.o 00:04:10.395 CC lib/nvmf/nvmf.o 00:04:10.395 CC lib/nvmf/nvmf_rpc.o 00:04:10.395 SYMLINK libspdk_scsi.so 00:04:10.395 CC lib/nvmf/transport.o 00:04:10.655 CC lib/ftl/ftl_writer.o 00:04:10.655 CC lib/nvmf/tcp.o 00:04:10.915 CC lib/nvmf/stubs.o 00:04:10.915 CC lib/nvmf/mdns_server.o 00:04:11.174 CC lib/nvmf/rdma.o 00:04:11.174 CC lib/iscsi/conn.o 00:04:11.174 CC lib/iscsi/init_grp.o 00:04:11.432 CC lib/ftl/ftl_rq.o 00:04:11.432 CC 
lib/vhost/vhost.o 00:04:11.432 CC lib/ftl/ftl_reloc.o 00:04:11.432 CC lib/nvmf/auth.o 00:04:11.432 CC lib/iscsi/iscsi.o 00:04:11.432 CC lib/iscsi/param.o 00:04:11.689 CC lib/vhost/vhost_rpc.o 00:04:11.689 CC lib/vhost/vhost_scsi.o 00:04:11.689 CC lib/ftl/ftl_l2p_cache.o 00:04:11.689 CC lib/iscsi/portal_grp.o 00:04:11.947 CC lib/iscsi/tgt_node.o 00:04:11.947 CC lib/iscsi/iscsi_subsystem.o 00:04:11.947 CC lib/vhost/vhost_blk.o 00:04:12.204 CC lib/vhost/rte_vhost_user.o 00:04:12.204 CC lib/iscsi/iscsi_rpc.o 00:04:12.204 CC lib/ftl/ftl_p2l.o 00:04:12.204 CC lib/ftl/ftl_p2l_log.o 00:04:12.204 CC lib/iscsi/task.o 00:04:12.204 CC lib/ftl/mngt/ftl_mngt.o 00:04:12.461 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:12.461 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:12.461 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:12.461 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:12.461 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:12.462 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:12.719 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:12.719 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:12.719 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:12.719 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:12.719 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:12.719 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:12.719 CC lib/ftl/utils/ftl_conf.o 00:04:12.719 LIB libspdk_iscsi.a 00:04:12.719 CC lib/ftl/utils/ftl_md.o 00:04:12.977 CC lib/ftl/utils/ftl_mempool.o 00:04:12.977 CC lib/ftl/utils/ftl_bitmap.o 00:04:12.977 CC lib/ftl/utils/ftl_property.o 00:04:12.977 SO libspdk_iscsi.so.8.0 00:04:12.977 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:12.977 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:12.977 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:12.977 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:12.977 SYMLINK libspdk_iscsi.so 00:04:12.977 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:12.977 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:13.235 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:13.235 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:13.235 LIB libspdk_vhost.a 00:04:13.235 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:13.235 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:13.235 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:13.235 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:13.235 SO libspdk_vhost.so.8.0 00:04:13.235 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:13.235 CC lib/ftl/base/ftl_base_dev.o 00:04:13.235 CC lib/ftl/base/ftl_base_bdev.o 00:04:13.235 SYMLINK libspdk_vhost.so 00:04:13.235 CC lib/ftl/ftl_trace.o 00:04:13.493 LIB libspdk_nvmf.a 00:04:13.493 LIB libspdk_ftl.a 00:04:13.493 SO libspdk_nvmf.so.20.0 00:04:13.753 SYMLINK libspdk_nvmf.so 00:04:13.753 SO libspdk_ftl.so.9.0 00:04:13.753 SYMLINK libspdk_ftl.so 00:04:14.011 CC module/env_dpdk/env_dpdk_rpc.o 00:04:14.268 CC module/scheduler/gscheduler/gscheduler.o 00:04:14.268 CC module/sock/posix/posix.o 00:04:14.268 CC module/keyring/file/keyring.o 00:04:14.268 CC module/blob/bdev/blob_bdev.o 00:04:14.268 CC module/accel/error/accel_error.o 00:04:14.268 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:14.268 CC module/accel/ioat/accel_ioat.o 00:04:14.268 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:14.268 CC module/fsdev/aio/fsdev_aio.o 00:04:14.268 LIB libspdk_env_dpdk_rpc.a 00:04:14.268 SO libspdk_env_dpdk_rpc.so.6.0 00:04:14.268 CC module/keyring/file/keyring_rpc.o 00:04:14.268 SYMLINK libspdk_env_dpdk_rpc.so 00:04:14.268 CC module/accel/error/accel_error_rpc.o 00:04:14.268 LIB libspdk_scheduler_gscheduler.a 00:04:14.268 LIB libspdk_scheduler_dpdk_governor.a 00:04:14.268 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:14.268 SO libspdk_scheduler_gscheduler.so.4.0 
00:04:14.268 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:14.268 CC module/accel/ioat/accel_ioat_rpc.o 00:04:14.268 LIB libspdk_keyring_file.a 00:04:14.268 LIB libspdk_scheduler_dynamic.a 00:04:14.268 SYMLINK libspdk_scheduler_gscheduler.so 00:04:14.268 LIB libspdk_blob_bdev.a 00:04:14.268 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:14.268 SO libspdk_keyring_file.so.2.0 00:04:14.268 SO libspdk_scheduler_dynamic.so.4.0 00:04:14.268 SO libspdk_blob_bdev.so.12.0 00:04:14.526 LIB libspdk_accel_error.a 00:04:14.526 SYMLINK libspdk_keyring_file.so 00:04:14.526 SYMLINK libspdk_scheduler_dynamic.so 00:04:14.526 CC module/fsdev/aio/linux_aio_mgr.o 00:04:14.526 SO libspdk_accel_error.so.2.0 00:04:14.526 SYMLINK libspdk_blob_bdev.so 00:04:14.526 LIB libspdk_accel_ioat.a 00:04:14.526 SYMLINK libspdk_accel_error.so 00:04:14.527 SO libspdk_accel_ioat.so.6.0 00:04:14.527 CC module/accel/dsa/accel_dsa.o 00:04:14.527 SYMLINK libspdk_accel_ioat.so 00:04:14.527 CC module/keyring/linux/keyring.o 00:04:14.527 CC module/accel/dsa/accel_dsa_rpc.o 00:04:14.527 CC module/accel/iaa/accel_iaa.o 00:04:14.527 CC module/keyring/linux/keyring_rpc.o 00:04:14.527 CC module/bdev/delay/vbdev_delay.o 00:04:14.784 CC module/bdev/error/vbdev_error.o 00:04:14.784 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:14.784 CC module/blobfs/bdev/blobfs_bdev.o 00:04:14.784 LIB libspdk_keyring_linux.a 00:04:14.784 LIB libspdk_accel_dsa.a 00:04:14.784 SO libspdk_keyring_linux.so.1.0 00:04:14.784 SO libspdk_accel_dsa.so.5.0 00:04:14.784 CC module/accel/iaa/accel_iaa_rpc.o 00:04:14.784 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:14.785 SYMLINK libspdk_keyring_linux.so 00:04:14.785 CC module/bdev/gpt/gpt.o 00:04:14.785 CC module/bdev/gpt/vbdev_gpt.o 00:04:14.785 LIB libspdk_fsdev_aio.a 00:04:14.785 SYMLINK libspdk_accel_dsa.so 00:04:14.785 CC module/bdev/error/vbdev_error_rpc.o 00:04:14.785 SO libspdk_fsdev_aio.so.1.0 00:04:14.785 LIB libspdk_accel_iaa.a 00:04:14.785 SO libspdk_accel_iaa.so.3.0 00:04:14.785 LIB libspdk_sock_posix.a 00:04:15.042 SYMLINK libspdk_fsdev_aio.so 00:04:15.042 LIB libspdk_blobfs_bdev.a 00:04:15.042 SO libspdk_sock_posix.so.6.0 00:04:15.042 LIB libspdk_bdev_delay.a 00:04:15.042 SYMLINK libspdk_accel_iaa.so 00:04:15.042 SO libspdk_blobfs_bdev.so.6.0 00:04:15.043 SO libspdk_bdev_delay.so.6.0 00:04:15.043 LIB libspdk_bdev_error.a 00:04:15.043 CC module/bdev/lvol/vbdev_lvol.o 00:04:15.043 SYMLINK libspdk_blobfs_bdev.so 00:04:15.043 SO libspdk_bdev_error.so.6.0 00:04:15.043 SYMLINK libspdk_sock_posix.so 00:04:15.043 SYMLINK libspdk_bdev_delay.so 00:04:15.043 CC module/bdev/malloc/bdev_malloc.o 00:04:15.043 SYMLINK libspdk_bdev_error.so 00:04:15.043 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:15.043 CC module/bdev/null/bdev_null.o 00:04:15.043 LIB libspdk_bdev_gpt.a 00:04:15.043 CC module/bdev/nvme/bdev_nvme.o 00:04:15.043 SO libspdk_bdev_gpt.so.6.0 00:04:15.043 CC module/bdev/raid/bdev_raid.o 00:04:15.043 CC module/bdev/passthru/vbdev_passthru.o 00:04:15.043 SYMLINK libspdk_bdev_gpt.so 00:04:15.043 CC module/bdev/raid/bdev_raid_rpc.o 00:04:15.043 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:15.043 CC module/bdev/split/vbdev_split.o 00:04:15.301 CC module/bdev/raid/bdev_raid_sb.o 00:04:15.301 CC module/bdev/null/bdev_null_rpc.o 00:04:15.301 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:15.301 CC module/bdev/split/vbdev_split_rpc.o 00:04:15.301 LIB libspdk_bdev_malloc.a 00:04:15.301 SO libspdk_bdev_malloc.so.6.0 00:04:15.301 LIB libspdk_bdev_null.a 00:04:15.301 SYMLINK libspdk_bdev_malloc.so 00:04:15.301 CC 
module/bdev/nvme/nvme_rpc.o 00:04:15.301 SO libspdk_bdev_null.so.6.0 00:04:15.301 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:15.301 LIB libspdk_bdev_split.a 00:04:15.567 SYMLINK libspdk_bdev_null.so 00:04:15.567 SO libspdk_bdev_split.so.6.0 00:04:15.567 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:15.567 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:15.567 SYMLINK libspdk_bdev_split.so 00:04:15.567 CC module/bdev/raid/raid0.o 00:04:15.567 LIB libspdk_bdev_passthru.a 00:04:15.567 CC module/bdev/xnvme/bdev_xnvme.o 00:04:15.567 SO libspdk_bdev_passthru.so.6.0 00:04:15.567 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:15.567 LIB libspdk_bdev_zone_block.a 00:04:15.567 SO libspdk_bdev_zone_block.so.6.0 00:04:15.567 CC module/bdev/aio/bdev_aio.o 00:04:15.567 SYMLINK libspdk_bdev_passthru.so 00:04:15.567 CC module/bdev/aio/bdev_aio_rpc.o 00:04:15.826 SYMLINK libspdk_bdev_zone_block.so 00:04:15.826 CC module/bdev/raid/raid1.o 00:04:15.826 LIB libspdk_bdev_lvol.a 00:04:15.826 CC module/bdev/raid/concat.o 00:04:15.826 SO libspdk_bdev_lvol.so.6.0 00:04:15.826 CC module/bdev/nvme/bdev_mdns_client.o 00:04:15.826 CC module/bdev/nvme/vbdev_opal.o 00:04:15.826 SYMLINK libspdk_bdev_lvol.so 00:04:15.826 LIB libspdk_bdev_xnvme.a 00:04:15.826 CC module/bdev/ftl/bdev_ftl.o 00:04:15.826 SO libspdk_bdev_xnvme.so.3.0 00:04:15.826 SYMLINK libspdk_bdev_xnvme.so 00:04:15.826 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:15.826 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:15.826 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:16.084 CC module/bdev/iscsi/bdev_iscsi.o 00:04:16.084 LIB libspdk_bdev_aio.a 00:04:16.084 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:16.084 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:16.084 SO libspdk_bdev_aio.so.6.0 00:04:16.084 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:16.084 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:16.084 SYMLINK libspdk_bdev_aio.so 00:04:16.084 LIB libspdk_bdev_ftl.a 00:04:16.084 SO libspdk_bdev_ftl.so.6.0 00:04:16.084 LIB libspdk_bdev_raid.a 00:04:16.084 SYMLINK libspdk_bdev_ftl.so 00:04:16.084 SO libspdk_bdev_raid.so.6.0 00:04:16.341 SYMLINK libspdk_bdev_raid.so 00:04:16.341 LIB libspdk_bdev_iscsi.a 00:04:16.341 SO libspdk_bdev_iscsi.so.6.0 00:04:16.341 SYMLINK libspdk_bdev_iscsi.so 00:04:16.341 LIB libspdk_bdev_virtio.a 00:04:16.600 SO libspdk_bdev_virtio.so.6.0 00:04:16.600 SYMLINK libspdk_bdev_virtio.so 00:04:17.197 LIB libspdk_bdev_nvme.a 00:04:17.457 SO libspdk_bdev_nvme.so.7.1 00:04:17.457 SYMLINK libspdk_bdev_nvme.so 00:04:18.024 CC module/event/subsystems/iobuf/iobuf.o 00:04:18.024 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:18.024 CC module/event/subsystems/fsdev/fsdev.o 00:04:18.024 CC module/event/subsystems/vmd/vmd.o 00:04:18.024 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:18.024 CC module/event/subsystems/scheduler/scheduler.o 00:04:18.024 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:18.024 CC module/event/subsystems/sock/sock.o 00:04:18.024 CC module/event/subsystems/keyring/keyring.o 00:04:18.024 LIB libspdk_event_scheduler.a 00:04:18.024 LIB libspdk_event_sock.a 00:04:18.024 LIB libspdk_event_fsdev.a 00:04:18.024 LIB libspdk_event_keyring.a 00:04:18.024 SO libspdk_event_scheduler.so.4.0 00:04:18.024 LIB libspdk_event_vmd.a 00:04:18.024 LIB libspdk_event_vhost_blk.a 00:04:18.024 SO libspdk_event_sock.so.5.0 00:04:18.024 LIB libspdk_event_iobuf.a 00:04:18.024 SO libspdk_event_fsdev.so.1.0 00:04:18.024 SO libspdk_event_keyring.so.1.0 00:04:18.024 SO libspdk_event_vmd.so.6.0 00:04:18.024 SO libspdk_event_vhost_blk.so.3.0 
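Because configure ran with --with-shared against the external DPDK, binaries produced by this build resolve two sets of shared objects at run time: the libspdk_* libraries being linked above and the librte_* libraries installed earlier. A quick sanity check, assuming spdk_tgt under build/bin as a representative binary (the binary name is illustrative, not taken from this log):
    export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:$LD_LIBRARY_PATH
    ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt | grep -E 'libspdk_|librte_'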
00:04:18.024 SO libspdk_event_iobuf.so.3.0 00:04:18.024 SYMLINK libspdk_event_scheduler.so 00:04:18.024 SYMLINK libspdk_event_sock.so 00:04:18.024 SYMLINK libspdk_event_fsdev.so 00:04:18.024 SYMLINK libspdk_event_keyring.so 00:04:18.024 SYMLINK libspdk_event_vmd.so 00:04:18.024 SYMLINK libspdk_event_iobuf.so 00:04:18.024 SYMLINK libspdk_event_vhost_blk.so 00:04:18.283 CC module/event/subsystems/accel/accel.o 00:04:18.542 LIB libspdk_event_accel.a 00:04:18.542 SO libspdk_event_accel.so.6.0 00:04:18.542 SYMLINK libspdk_event_accel.so 00:04:18.801 CC module/event/subsystems/bdev/bdev.o 00:04:18.801 LIB libspdk_event_bdev.a 00:04:19.060 SO libspdk_event_bdev.so.6.0 00:04:19.060 SYMLINK libspdk_event_bdev.so 00:04:19.060 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:19.060 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:19.060 CC module/event/subsystems/scsi/scsi.o 00:04:19.060 CC module/event/subsystems/ublk/ublk.o 00:04:19.060 CC module/event/subsystems/nbd/nbd.o 00:04:19.319 LIB libspdk_event_ublk.a 00:04:19.319 LIB libspdk_event_scsi.a 00:04:19.319 LIB libspdk_event_nbd.a 00:04:19.319 SO libspdk_event_ublk.so.3.0 00:04:19.319 SO libspdk_event_scsi.so.6.0 00:04:19.319 SO libspdk_event_nbd.so.6.0 00:04:19.319 SYMLINK libspdk_event_ublk.so 00:04:19.319 SYMLINK libspdk_event_scsi.so 00:04:19.319 SYMLINK libspdk_event_nbd.so 00:04:19.319 LIB libspdk_event_nvmf.a 00:04:19.319 SO libspdk_event_nvmf.so.6.0 00:04:19.319 SYMLINK libspdk_event_nvmf.so 00:04:19.578 CC module/event/subsystems/iscsi/iscsi.o 00:04:19.578 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:19.578 LIB libspdk_event_vhost_scsi.a 00:04:19.578 LIB libspdk_event_iscsi.a 00:04:19.578 SO libspdk_event_vhost_scsi.so.3.0 00:04:19.578 SO libspdk_event_iscsi.so.6.0 00:04:19.839 SYMLINK libspdk_event_vhost_scsi.so 00:04:19.839 SYMLINK libspdk_event_iscsi.so 00:04:19.839 SO libspdk.so.6.0 00:04:19.839 SYMLINK libspdk.so 00:04:20.097 CXX app/trace/trace.o 00:04:20.097 TEST_HEADER include/spdk/accel.h 00:04:20.097 CC test/rpc_client/rpc_client_test.o 00:04:20.097 TEST_HEADER include/spdk/accel_module.h 00:04:20.097 TEST_HEADER include/spdk/assert.h 00:04:20.097 TEST_HEADER include/spdk/barrier.h 00:04:20.097 TEST_HEADER include/spdk/base64.h 00:04:20.097 TEST_HEADER include/spdk/bdev.h 00:04:20.097 TEST_HEADER include/spdk/bdev_module.h 00:04:20.097 TEST_HEADER include/spdk/bdev_zone.h 00:04:20.097 TEST_HEADER include/spdk/bit_array.h 00:04:20.097 TEST_HEADER include/spdk/bit_pool.h 00:04:20.097 TEST_HEADER include/spdk/blob_bdev.h 00:04:20.097 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:20.097 TEST_HEADER include/spdk/blobfs.h 00:04:20.097 TEST_HEADER include/spdk/blob.h 00:04:20.097 TEST_HEADER include/spdk/conf.h 00:04:20.097 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:20.097 TEST_HEADER include/spdk/config.h 00:04:20.098 TEST_HEADER include/spdk/cpuset.h 00:04:20.098 TEST_HEADER include/spdk/crc16.h 00:04:20.098 TEST_HEADER include/spdk/crc32.h 00:04:20.098 TEST_HEADER include/spdk/crc64.h 00:04:20.098 TEST_HEADER include/spdk/dif.h 00:04:20.098 TEST_HEADER include/spdk/dma.h 00:04:20.098 TEST_HEADER include/spdk/endian.h 00:04:20.098 TEST_HEADER include/spdk/env_dpdk.h 00:04:20.098 TEST_HEADER include/spdk/env.h 00:04:20.098 TEST_HEADER include/spdk/event.h 00:04:20.098 TEST_HEADER include/spdk/fd_group.h 00:04:20.098 TEST_HEADER include/spdk/fd.h 00:04:20.098 TEST_HEADER include/spdk/file.h 00:04:20.098 TEST_HEADER include/spdk/fsdev.h 00:04:20.098 TEST_HEADER include/spdk/fsdev_module.h 00:04:20.098 TEST_HEADER 
include/spdk/ftl.h 00:04:20.098 TEST_HEADER include/spdk/gpt_spec.h 00:04:20.098 TEST_HEADER include/spdk/hexlify.h 00:04:20.098 TEST_HEADER include/spdk/histogram_data.h 00:04:20.098 CC examples/util/zipf/zipf.o 00:04:20.098 TEST_HEADER include/spdk/idxd.h 00:04:20.098 TEST_HEADER include/spdk/idxd_spec.h 00:04:20.098 TEST_HEADER include/spdk/init.h 00:04:20.098 TEST_HEADER include/spdk/ioat.h 00:04:20.098 TEST_HEADER include/spdk/ioat_spec.h 00:04:20.098 CC test/thread/poller_perf/poller_perf.o 00:04:20.098 TEST_HEADER include/spdk/iscsi_spec.h 00:04:20.098 CC examples/ioat/perf/perf.o 00:04:20.098 TEST_HEADER include/spdk/json.h 00:04:20.098 TEST_HEADER include/spdk/jsonrpc.h 00:04:20.098 TEST_HEADER include/spdk/keyring.h 00:04:20.098 TEST_HEADER include/spdk/keyring_module.h 00:04:20.098 TEST_HEADER include/spdk/likely.h 00:04:20.098 TEST_HEADER include/spdk/log.h 00:04:20.098 TEST_HEADER include/spdk/lvol.h 00:04:20.098 TEST_HEADER include/spdk/md5.h 00:04:20.098 TEST_HEADER include/spdk/memory.h 00:04:20.098 TEST_HEADER include/spdk/mmio.h 00:04:20.098 TEST_HEADER include/spdk/nbd.h 00:04:20.098 TEST_HEADER include/spdk/net.h 00:04:20.098 TEST_HEADER include/spdk/notify.h 00:04:20.098 TEST_HEADER include/spdk/nvme.h 00:04:20.098 TEST_HEADER include/spdk/nvme_intel.h 00:04:20.098 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:20.098 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:20.098 TEST_HEADER include/spdk/nvme_spec.h 00:04:20.098 TEST_HEADER include/spdk/nvme_zns.h 00:04:20.098 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:20.098 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:20.098 TEST_HEADER include/spdk/nvmf.h 00:04:20.098 TEST_HEADER include/spdk/nvmf_spec.h 00:04:20.098 TEST_HEADER include/spdk/nvmf_transport.h 00:04:20.098 TEST_HEADER include/spdk/opal.h 00:04:20.098 TEST_HEADER include/spdk/opal_spec.h 00:04:20.098 TEST_HEADER include/spdk/pci_ids.h 00:04:20.098 TEST_HEADER include/spdk/pipe.h 00:04:20.098 TEST_HEADER include/spdk/queue.h 00:04:20.098 CC test/dma/test_dma/test_dma.o 00:04:20.098 TEST_HEADER include/spdk/reduce.h 00:04:20.098 CC test/app/bdev_svc/bdev_svc.o 00:04:20.098 TEST_HEADER include/spdk/rpc.h 00:04:20.098 TEST_HEADER include/spdk/scheduler.h 00:04:20.098 TEST_HEADER include/spdk/scsi.h 00:04:20.098 TEST_HEADER include/spdk/scsi_spec.h 00:04:20.098 TEST_HEADER include/spdk/sock.h 00:04:20.098 TEST_HEADER include/spdk/stdinc.h 00:04:20.098 TEST_HEADER include/spdk/string.h 00:04:20.098 TEST_HEADER include/spdk/thread.h 00:04:20.098 TEST_HEADER include/spdk/trace.h 00:04:20.098 TEST_HEADER include/spdk/trace_parser.h 00:04:20.098 TEST_HEADER include/spdk/tree.h 00:04:20.098 TEST_HEADER include/spdk/ublk.h 00:04:20.098 TEST_HEADER include/spdk/util.h 00:04:20.098 TEST_HEADER include/spdk/uuid.h 00:04:20.098 TEST_HEADER include/spdk/version.h 00:04:20.098 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:20.098 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:20.098 TEST_HEADER include/spdk/vhost.h 00:04:20.098 TEST_HEADER include/spdk/vmd.h 00:04:20.098 TEST_HEADER include/spdk/xor.h 00:04:20.098 TEST_HEADER include/spdk/zipf.h 00:04:20.098 CXX test/cpp_headers/accel.o 00:04:20.098 CC test/env/mem_callbacks/mem_callbacks.o 00:04:20.356 LINK rpc_client_test 00:04:20.356 LINK poller_perf 00:04:20.356 LINK zipf 00:04:20.356 LINK interrupt_tgt 00:04:20.356 LINK bdev_svc 00:04:20.356 LINK mem_callbacks 00:04:20.356 CXX test/cpp_headers/accel_module.o 00:04:20.356 LINK ioat_perf 00:04:20.356 LINK spdk_trace 00:04:20.356 CC test/app/histogram_perf/histogram_perf.o 
00:04:20.614 CC examples/ioat/verify/verify.o 00:04:20.614 CXX test/cpp_headers/assert.o 00:04:20.614 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:20.614 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:20.614 CC test/env/vtophys/vtophys.o 00:04:20.614 CC test/event/event_perf/event_perf.o 00:04:20.614 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:20.614 LINK histogram_perf 00:04:20.614 CXX test/cpp_headers/barrier.o 00:04:20.614 CC app/trace_record/trace_record.o 00:04:20.614 LINK test_dma 00:04:20.614 LINK vtophys 00:04:20.614 LINK verify 00:04:20.614 LINK event_perf 00:04:20.614 LINK env_dpdk_post_init 00:04:20.872 CXX test/cpp_headers/base64.o 00:04:20.872 LINK spdk_trace_record 00:04:20.872 CXX test/cpp_headers/bdev.o 00:04:20.872 CC app/nvmf_tgt/nvmf_main.o 00:04:20.872 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:20.872 CC test/event/reactor/reactor.o 00:04:20.872 CC test/env/memory/memory_ut.o 00:04:20.872 CXX test/cpp_headers/bdev_module.o 00:04:20.872 LINK nvme_fuzz 00:04:20.872 CC test/env/pci/pci_ut.o 00:04:20.872 LINK nvmf_tgt 00:04:20.872 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:21.131 CC examples/thread/thread/thread_ex.o 00:04:21.131 LINK reactor 00:04:21.131 CXX test/cpp_headers/bdev_zone.o 00:04:21.131 CC app/iscsi_tgt/iscsi_tgt.o 00:04:21.131 CXX test/cpp_headers/bit_array.o 00:04:21.131 CC app/spdk_tgt/spdk_tgt.o 00:04:21.131 CC test/event/reactor_perf/reactor_perf.o 00:04:21.131 CXX test/cpp_headers/bit_pool.o 00:04:21.131 LINK thread 00:04:21.389 CC test/event/app_repeat/app_repeat.o 00:04:21.389 LINK iscsi_tgt 00:04:21.389 LINK spdk_tgt 00:04:21.389 LINK pci_ut 00:04:21.389 CXX test/cpp_headers/blob_bdev.o 00:04:21.389 LINK reactor_perf 00:04:21.389 LINK vhost_fuzz 00:04:21.389 LINK app_repeat 00:04:21.389 CC app/spdk_lspci/spdk_lspci.o 00:04:21.389 CXX test/cpp_headers/blobfs_bdev.o 00:04:21.646 CC test/event/scheduler/scheduler.o 00:04:21.646 CXX test/cpp_headers/blobfs.o 00:04:21.646 CC test/app/jsoncat/jsoncat.o 00:04:21.646 CC app/spdk_nvme_perf/perf.o 00:04:21.646 CC examples/sock/hello_world/hello_sock.o 00:04:21.646 LINK spdk_lspci 00:04:21.646 CXX test/cpp_headers/blob.o 00:04:21.646 LINK memory_ut 00:04:21.646 CXX test/cpp_headers/conf.o 00:04:21.646 LINK jsoncat 00:04:21.646 LINK scheduler 00:04:21.646 CC test/accel/dif/dif.o 00:04:21.646 LINK hello_sock 00:04:21.904 CXX test/cpp_headers/config.o 00:04:21.904 CXX test/cpp_headers/cpuset.o 00:04:21.904 CXX test/cpp_headers/crc16.o 00:04:21.904 CC app/spdk_nvme_identify/identify.o 00:04:21.904 LINK iscsi_fuzz 00:04:21.904 CC examples/vmd/lsvmd/lsvmd.o 00:04:21.904 CXX test/cpp_headers/crc32.o 00:04:21.904 CC examples/vmd/led/led.o 00:04:21.904 CC test/blobfs/mkfs/mkfs.o 00:04:22.162 CC test/lvol/esnap/esnap.o 00:04:22.162 CC test/nvme/aer/aer.o 00:04:22.162 CXX test/cpp_headers/crc64.o 00:04:22.162 LINK lsvmd 00:04:22.162 LINK led 00:04:22.162 LINK mkfs 00:04:22.162 CC test/app/stub/stub.o 00:04:22.162 CXX test/cpp_headers/dif.o 00:04:22.162 LINK spdk_nvme_perf 00:04:22.162 CC app/spdk_nvme_discover/discovery_aer.o 00:04:22.162 CXX test/cpp_headers/dma.o 00:04:22.162 LINK dif 00:04:22.420 LINK stub 00:04:22.420 CC examples/idxd/perf/perf.o 00:04:22.420 LINK aer 00:04:22.420 CXX test/cpp_headers/endian.o 00:04:22.420 CXX test/cpp_headers/env_dpdk.o 00:04:22.420 CC app/spdk_top/spdk_top.o 00:04:22.420 LINK spdk_nvme_discover 00:04:22.420 CXX test/cpp_headers/env.o 00:04:22.420 CC app/vhost/vhost.o 00:04:22.420 CC test/nvme/reset/reset.o 00:04:22.420 CXX test/cpp_headers/event.o 
00:04:22.678 LINK vhost 00:04:22.678 LINK idxd_perf 00:04:22.678 LINK spdk_nvme_identify 00:04:22.678 CC app/spdk_dd/spdk_dd.o 00:04:22.678 CXX test/cpp_headers/fd_group.o 00:04:22.678 CXX test/cpp_headers/fd.o 00:04:22.678 LINK reset 00:04:22.678 CC examples/accel/perf/accel_perf.o 00:04:22.678 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:22.678 CXX test/cpp_headers/file.o 00:04:22.935 CC test/nvme/sgl/sgl.o 00:04:22.935 CXX test/cpp_headers/fsdev.o 00:04:22.935 CC app/fio/nvme/fio_plugin.o 00:04:22.935 CC test/nvme/e2edp/nvme_dp.o 00:04:22.935 LINK hello_fsdev 00:04:22.935 CC test/bdev/bdevio/bdevio.o 00:04:22.935 CXX test/cpp_headers/fsdev_module.o 00:04:22.935 LINK spdk_dd 00:04:22.935 LINK sgl 00:04:23.194 LINK accel_perf 00:04:23.194 LINK nvme_dp 00:04:23.194 LINK spdk_top 00:04:23.194 CXX test/cpp_headers/ftl.o 00:04:23.194 CXX test/cpp_headers/gpt_spec.o 00:04:23.194 CXX test/cpp_headers/hexlify.o 00:04:23.194 CC test/nvme/overhead/overhead.o 00:04:23.194 CC test/nvme/err_injection/err_injection.o 00:04:23.194 CC examples/nvme/hello_world/hello_world.o 00:04:23.194 CXX test/cpp_headers/histogram_data.o 00:04:23.452 CC examples/blob/hello_world/hello_blob.o 00:04:23.452 CC examples/nvme/reconnect/reconnect.o 00:04:23.452 LINK bdevio 00:04:23.452 LINK spdk_nvme 00:04:23.452 CXX test/cpp_headers/idxd.o 00:04:23.452 CC examples/blob/cli/blobcli.o 00:04:23.452 LINK overhead 00:04:23.452 LINK hello_world 00:04:23.452 LINK err_injection 00:04:23.452 CXX test/cpp_headers/idxd_spec.o 00:04:23.452 LINK hello_blob 00:04:23.710 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:23.710 CC app/fio/bdev/fio_plugin.o 00:04:23.710 CC test/nvme/startup/startup.o 00:04:23.710 LINK reconnect 00:04:23.710 CXX test/cpp_headers/init.o 00:04:23.710 CC test/nvme/reserve/reserve.o 00:04:23.710 CC test/nvme/simple_copy/simple_copy.o 00:04:23.710 LINK startup 00:04:23.710 CC test/nvme/connect_stress/connect_stress.o 00:04:23.710 CXX test/cpp_headers/ioat.o 00:04:23.710 CC examples/nvme/arbitration/arbitration.o 00:04:23.969 LINK reserve 00:04:23.969 LINK blobcli 00:04:23.969 LINK simple_copy 00:04:23.969 CXX test/cpp_headers/ioat_spec.o 00:04:23.969 LINK connect_stress 00:04:23.969 CC examples/nvme/hotplug/hotplug.o 00:04:23.969 LINK nvme_manage 00:04:23.969 CXX test/cpp_headers/iscsi_spec.o 00:04:23.969 LINK arbitration 00:04:23.969 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:23.969 LINK spdk_bdev 00:04:23.969 CC test/nvme/boot_partition/boot_partition.o 00:04:24.228 CC test/nvme/compliance/nvme_compliance.o 00:04:24.228 CXX test/cpp_headers/json.o 00:04:24.228 LINK hotplug 00:04:24.228 LINK cmb_copy 00:04:24.228 CC examples/nvme/abort/abort.o 00:04:24.228 CC examples/bdev/hello_world/hello_bdev.o 00:04:24.228 LINK boot_partition 00:04:24.228 CC test/nvme/fused_ordering/fused_ordering.o 00:04:24.228 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:24.228 CXX test/cpp_headers/jsonrpc.o 00:04:24.486 CC test/nvme/fdp/fdp.o 00:04:24.486 CC test/nvme/cuse/cuse.o 00:04:24.486 LINK doorbell_aers 00:04:24.486 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:24.486 LINK fused_ordering 00:04:24.486 LINK hello_bdev 00:04:24.486 CXX test/cpp_headers/keyring.o 00:04:24.486 LINK nvme_compliance 00:04:24.486 CXX test/cpp_headers/keyring_module.o 00:04:24.486 LINK pmr_persistence 00:04:24.486 LINK abort 00:04:24.486 CXX test/cpp_headers/likely.o 00:04:24.486 CXX test/cpp_headers/log.o 00:04:24.486 CXX test/cpp_headers/lvol.o 00:04:24.745 CC examples/bdev/bdevperf/bdevperf.o 00:04:24.745 CXX test/cpp_headers/md5.o 
00:04:24.745 LINK fdp 00:04:24.745 CXX test/cpp_headers/memory.o 00:04:24.745 CXX test/cpp_headers/mmio.o 00:04:24.745 CXX test/cpp_headers/nbd.o 00:04:24.745 CXX test/cpp_headers/net.o 00:04:24.745 CXX test/cpp_headers/notify.o 00:04:24.745 CXX test/cpp_headers/nvme.o 00:04:24.745 CXX test/cpp_headers/nvme_intel.o 00:04:24.745 CXX test/cpp_headers/nvme_ocssd.o 00:04:24.745 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:24.745 CXX test/cpp_headers/nvme_spec.o 00:04:24.745 CXX test/cpp_headers/nvme_zns.o 00:04:24.745 CXX test/cpp_headers/nvmf_cmd.o 00:04:25.003 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:25.003 CXX test/cpp_headers/nvmf.o 00:04:25.003 CXX test/cpp_headers/nvmf_spec.o 00:04:25.003 CXX test/cpp_headers/nvmf_transport.o 00:04:25.003 CXX test/cpp_headers/opal.o 00:04:25.003 CXX test/cpp_headers/opal_spec.o 00:04:25.003 CXX test/cpp_headers/pci_ids.o 00:04:25.003 CXX test/cpp_headers/pipe.o 00:04:25.003 CXX test/cpp_headers/queue.o 00:04:25.003 CXX test/cpp_headers/reduce.o 00:04:25.003 CXX test/cpp_headers/rpc.o 00:04:25.003 CXX test/cpp_headers/scheduler.o 00:04:25.003 CXX test/cpp_headers/scsi.o 00:04:25.261 CXX test/cpp_headers/scsi_spec.o 00:04:25.261 CXX test/cpp_headers/sock.o 00:04:25.261 CXX test/cpp_headers/stdinc.o 00:04:25.261 CXX test/cpp_headers/string.o 00:04:25.261 CXX test/cpp_headers/thread.o 00:04:25.261 CXX test/cpp_headers/trace.o 00:04:25.261 CXX test/cpp_headers/trace_parser.o 00:04:25.261 CXX test/cpp_headers/tree.o 00:04:25.261 CXX test/cpp_headers/ublk.o 00:04:25.261 CXX test/cpp_headers/util.o 00:04:25.261 CXX test/cpp_headers/uuid.o 00:04:25.261 CXX test/cpp_headers/version.o 00:04:25.261 CXX test/cpp_headers/vfio_user_pci.o 00:04:25.261 CXX test/cpp_headers/vfio_user_spec.o 00:04:25.261 CXX test/cpp_headers/vhost.o 00:04:25.261 CXX test/cpp_headers/vmd.o 00:04:25.523 CXX test/cpp_headers/xor.o 00:04:25.524 LINK bdevperf 00:04:25.524 CXX test/cpp_headers/zipf.o 00:04:25.524 LINK cuse 00:04:25.782 CC examples/nvmf/nvmf/nvmf.o 00:04:26.040 LINK nvmf 00:04:27.437 LINK esnap 00:04:27.698 00:04:27.698 real 1m0.773s 00:04:27.698 user 5m5.064s 00:04:27.698 sys 0m51.787s 00:04:27.698 21:10:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:27.698 21:10:17 make -- common/autotest_common.sh@10 -- $ set +x 00:04:27.698 ************************************ 00:04:27.698 END TEST make 00:04:27.698 ************************************ 00:04:27.698 21:10:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:27.698 21:10:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:27.698 21:10:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:27.698 21:10:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.698 21:10:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:27.698 21:10:17 -- pm/common@44 -- $ pid=5806 00:04:27.698 21:10:17 -- pm/common@50 -- $ kill -TERM 5806 00:04:27.698 21:10:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.698 21:10:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:27.698 21:10:17 -- pm/common@44 -- $ pid=5807 00:04:27.698 21:10:17 -- pm/common@50 -- $ kill -TERM 5807 00:04:27.698 21:10:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:27.698 21:10:17 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:27.698 21:10:17 -- 
common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.698 21:10:17 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.698 21:10:17 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.698 21:10:17 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.698 21:10:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.698 21:10:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.698 21:10:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.698 21:10:17 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.698 21:10:17 -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.698 21:10:17 -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.698 21:10:17 -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.698 21:10:17 -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.698 21:10:17 -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.698 21:10:17 -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.698 21:10:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.698 21:10:17 -- scripts/common.sh@344 -- # case "$op" in 00:04:27.698 21:10:17 -- scripts/common.sh@345 -- # : 1 00:04:27.698 21:10:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.698 21:10:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.698 21:10:17 -- scripts/common.sh@365 -- # decimal 1 00:04:27.698 21:10:17 -- scripts/common.sh@353 -- # local d=1 00:04:27.698 21:10:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.698 21:10:17 -- scripts/common.sh@355 -- # echo 1 00:04:27.698 21:10:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.698 21:10:17 -- scripts/common.sh@366 -- # decimal 2 00:04:27.698 21:10:17 -- scripts/common.sh@353 -- # local d=2 00:04:27.698 21:10:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.698 21:10:17 -- scripts/common.sh@355 -- # echo 2 00:04:27.698 21:10:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.698 21:10:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.698 21:10:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.698 21:10:17 -- scripts/common.sh@368 -- # return 0 00:04:27.698 21:10:17 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.698 21:10:17 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.698 --rc genhtml_branch_coverage=1 00:04:27.698 --rc genhtml_function_coverage=1 00:04:27.698 --rc genhtml_legend=1 00:04:27.699 --rc geninfo_all_blocks=1 00:04:27.699 --rc geninfo_unexecuted_blocks=1 00:04:27.699 00:04:27.699 ' 00:04:27.699 21:10:17 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.699 --rc genhtml_branch_coverage=1 00:04:27.699 --rc genhtml_function_coverage=1 00:04:27.699 --rc genhtml_legend=1 00:04:27.699 --rc geninfo_all_blocks=1 00:04:27.699 --rc geninfo_unexecuted_blocks=1 00:04:27.699 00:04:27.699 ' 00:04:27.699 21:10:17 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.699 --rc genhtml_branch_coverage=1 00:04:27.699 --rc genhtml_function_coverage=1 00:04:27.699 --rc genhtml_legend=1 00:04:27.699 --rc geninfo_all_blocks=1 00:04:27.699 --rc geninfo_unexecuted_blocks=1 00:04:27.699 00:04:27.699 ' 00:04:27.699 21:10:17 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.699 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:27.699 --rc genhtml_branch_coverage=1 00:04:27.699 --rc genhtml_function_coverage=1 00:04:27.699 --rc genhtml_legend=1 00:04:27.699 --rc geninfo_all_blocks=1 00:04:27.699 --rc geninfo_unexecuted_blocks=1 00:04:27.699 00:04:27.699 ' 00:04:27.699 21:10:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.699 21:10:17 -- nvmf/common.sh@7 -- # uname -s 00:04:27.699 21:10:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.699 21:10:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.699 21:10:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.699 21:10:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.699 21:10:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.699 21:10:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.699 21:10:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.699 21:10:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.699 21:10:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.699 21:10:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.699 21:10:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:10f3d47c-ff38-4224-8971-148f962d5374 00:04:27.699 21:10:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=10f3d47c-ff38-4224-8971-148f962d5374 00:04:27.699 21:10:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.699 21:10:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.699 21:10:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.699 21:10:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.699 21:10:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.699 21:10:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.699 21:10:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.699 21:10:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.699 21:10:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.699 21:10:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.699 21:10:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.699 21:10:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.699 21:10:17 -- paths/export.sh@5 -- # export PATH 00:04:27.699 21:10:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.699 21:10:17 -- nvmf/common.sh@51 -- # : 0 00:04:27.699 21:10:17 -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.699 21:10:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.699 21:10:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.699 21:10:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.699 21:10:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.699 21:10:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.699 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.699 21:10:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.699 21:10:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.699 21:10:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.699 21:10:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:27.699 21:10:17 -- spdk/autotest.sh@32 -- # uname -s 00:04:27.699 21:10:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:27.699 21:10:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:27.699 21:10:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.699 21:10:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:27.699 21:10:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.699 21:10:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:27.957 21:10:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:27.957 21:10:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:27.957 21:10:17 -- spdk/autotest.sh@48 -- # udevadm_pid=68030 00:04:27.957 21:10:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:27.958 21:10:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:27.958 21:10:17 -- pm/common@17 -- # local monitor 00:04:27.958 21:10:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.958 21:10:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.958 21:10:17 -- pm/common@25 -- # sleep 1 00:04:27.958 21:10:17 -- pm/common@21 -- # date +%s 00:04:27.958 21:10:17 -- pm/common@21 -- # date +%s 00:04:27.958 21:10:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734383417 00:04:27.958 21:10:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734383417 00:04:27.958 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734383417_collect-cpu-load.pm.log 00:04:27.958 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734383417_collect-vmstat.pm.log 00:04:28.897 21:10:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:28.897 21:10:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:28.897 21:10:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.897 21:10:18 -- common/autotest_common.sh@10 -- # set +x 00:04:28.897 21:10:18 -- spdk/autotest.sh@59 -- # create_test_list 00:04:28.897 21:10:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:28.897 21:10:18 -- common/autotest_common.sh@10 -- # set +x 00:04:28.897 21:10:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:28.897 21:10:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:28.897 21:10:18 -- 
spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:28.897 21:10:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:28.897 21:10:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:28.897 21:10:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:28.897 21:10:18 -- common/autotest_common.sh@1457 -- # uname 00:04:28.897 21:10:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:28.897 21:10:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:28.897 21:10:18 -- common/autotest_common.sh@1477 -- # uname 00:04:28.897 21:10:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:28.897 21:10:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:28.897 21:10:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:28.897 lcov: LCOV version 1.15 00:04:28.897 21:10:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:43.805 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:43.805 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:01.930 21:10:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:01.930 21:10:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.930 21:10:48 -- common/autotest_common.sh@10 -- # set +x 00:05:01.930 21:10:48 -- spdk/autotest.sh@78 -- # rm -f 00:05:01.930 21:10:48 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.930 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:01.930 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:01.930 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:01.930 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:01.930 21:10:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:01.930 21:10:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:01.930 21:10:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:01.930 21:10:49 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:01.930 21:10:49 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:01.930 21:10:49 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:01.930 21:10:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:01.930 21:10:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:01.930 21:10:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:01.930 21:10:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:01.930 21:10:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:01.930 21:10:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.930 21:10:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.930 21:10:49 -- common/autotest_common.sh@1668 -- # for nvme in 
/sys/class/nvme/nvme* 00:05:01.930 21:10:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:01.930 21:10:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:01.930 21:10:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:01.930 21:10:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:01.930 21:10:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:01.930 21:10:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.930 21:10:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:01.930 21:10:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:01.930 21:10:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:01.930 21:10:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:01.930 21:10:49 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:01.930 21:10:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:01.930 21:10:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.930 21:10:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:01.930 21:10:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:05:01.930 21:10:49 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:01.930 21:10:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:01.930 21:10:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.930 21:10:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:01.931 21:10:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:05:01.931 21:10:49 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:01.931 21:10:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:01.931 21:10:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.931 21:10:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:01.931 21:10:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:01.931 21:10:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:01.931 21:10:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:05:01.931 21:10:49 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:01.931 21:10:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:01.931 21:10:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.931 21:10:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:01.931 21:10:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.931 21:10:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.931 21:10:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:01.931 21:10:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:01.931 21:10:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:01.931 No valid GPT data, bailing 00:05:01.931 21:10:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:01.931 21:10:49 -- scripts/common.sh@394 -- # pt= 00:05:01.931 21:10:49 -- scripts/common.sh@395 -- # return 1 00:05:01.931 21:10:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:01.931 1+0 records in 00:05:01.931 1+0 records out 00:05:01.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0242592 s, 43.2 MB/s 00:05:01.931 21:10:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.931 21:10:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.931 21:10:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:01.931 21:10:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:01.931 21:10:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:01.931 No valid GPT data, bailing 00:05:01.931 21:10:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:01.931 21:10:49 -- scripts/common.sh@394 -- # pt= 00:05:01.931 21:10:49 -- scripts/common.sh@395 -- # return 1 00:05:01.931 21:10:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:01.931 1+0 records in 00:05:01.931 1+0 records out 00:05:01.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599696 s, 175 MB/s 00:05:01.931 21:10:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.931 21:10:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.931 21:10:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:01.931 21:10:49 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:01.931 21:10:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:01.931 No valid GPT data, bailing 00:05:01.931 21:10:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:01.931 21:10:49 -- scripts/common.sh@394 -- # pt= 00:05:01.931 21:10:49 -- scripts/common.sh@395 -- # return 1 00:05:01.931 21:10:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:01.931 1+0 records in 00:05:01.931 1+0 records out 00:05:01.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063861 s, 164 MB/s 00:05:01.931 21:10:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.931 21:10:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.931 21:10:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:01.931 21:10:49 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:01.931 21:10:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:01.931 No valid GPT data, bailing 00:05:01.931 21:10:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:01.931 21:10:50 -- scripts/common.sh@394 -- # pt= 00:05:01.931 21:10:50 -- scripts/common.sh@395 -- # return 1 00:05:01.931 21:10:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:01.931 1+0 records in 00:05:01.931 1+0 records out 00:05:01.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518261 s, 202 MB/s 00:05:01.931 21:10:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.931 21:10:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.931 21:10:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:01.931 21:10:50 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:01.931 21:10:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:01.931 No valid GPT data, bailing 00:05:01.931 21:10:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:01.931 21:10:50 -- scripts/common.sh@394 -- # pt= 00:05:01.931 21:10:50 -- scripts/common.sh@395 -- # return 1 00:05:01.931 21:10:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:01.931 1+0 records in 00:05:01.931 1+0 records out 00:05:01.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.00622094 s, 169 MB/s 00:05:01.931 21:10:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.931 21:10:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.931 21:10:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:01.931 21:10:50 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:01.931 21:10:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:01.931 No valid GPT data, bailing 00:05:01.931 21:10:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:01.931 21:10:50 -- scripts/common.sh@394 -- # pt= 00:05:01.931 21:10:50 -- scripts/common.sh@395 -- # return 1 00:05:01.931 21:10:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:01.931 1+0 records in 00:05:01.931 1+0 records out 00:05:01.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651369 s, 161 MB/s 00:05:01.931 21:10:50 -- spdk/autotest.sh@105 -- # sync 00:05:01.931 21:10:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:01.931 21:10:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:01.931 21:10:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:03.319 21:10:52 -- spdk/autotest.sh@111 -- # uname -s 00:05:03.319 21:10:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:03.319 21:10:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:03.319 21:10:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:03.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.842 Hugepages 00:05:03.842 node hugesize free / total 00:05:03.842 node0 1048576kB 0 / 0 00:05:04.103 node0 2048kB 0 / 0 00:05:04.103 00:05:04.103 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.103 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:04.103 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:04.103 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:04.364 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:04.364 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:04.364 21:10:53 -- spdk/autotest.sh@117 -- # uname -s 00:05:04.364 21:10:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:04.364 21:10:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:04.364 21:10:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.514 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.514 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.514 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.514 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.514 21:10:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:06.457 21:10:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:06.457 21:10:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:06.457 21:10:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:06.457 21:10:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:06.457 21:10:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:06.457 21:10:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:06.457 21:10:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:06.457 21:10:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:06.457 21:10:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:06.457 21:10:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:06.457 21:10:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:06.457 21:10:56 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.980 Waiting for block devices as requested 00:05:06.980 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:07.241 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:07.241 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:07.241 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:12.563 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:12.563 21:11:01 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:12.563 21:11:01 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:12.564 21:11:01 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.564 21:11:01 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:12.564 21:11:01 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:12.564 21:11:01 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:12.564 21:11:01 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:12.564 21:11:01 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:12.564 21:11:01 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:12.564 21:11:01 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:12.564 21:11:01 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:12.564 21:11:01 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:12.564 21:11:01 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:12.564 21:11:01 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:12.564 21:11:01 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:12.564 21:11:01 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:12.564 21:11:01 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:12.564 21:11:01 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:12.564 21:11:01 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:12.564 21:11:01 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:12.564 21:11:01 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:12.564 21:11:01 -- common/autotest_common.sh@1543 -- # continue 00:05:12.564 21:11:01 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:12.564 21:11:01 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:12.564 21:11:01 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.564 21:11:01 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:12.564 21:11:01 -- common/autotest_common.sh@1487 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:12.564 21:11:01 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:12.564 21:11:01 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:12.564 21:11:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:12.564 21:11:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:12.564 21:11:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:12.564 21:11:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:12.564 21:11:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:12.564 21:11:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1543 -- # continue 00:05:12.564 21:11:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:12.564 21:11:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:12.564 21:11:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.564 21:11:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:12.564 21:11:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:12.564 21:11:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:12.564 21:11:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:12.564 21:11:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1543 -- # continue 00:05:12.564 21:11:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:12.564 21:11:02 -- common/autotest_common.sh@1525 -- # 
get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:12.564 21:11:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:12.564 21:11:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.564 21:11:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:12.564 21:11:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:12.564 21:11:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:12.564 21:11:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:12.564 21:11:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:12.564 21:11:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:12.564 21:11:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:12.564 21:11:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:12.564 21:11:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:12.564 21:11:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:12.564 21:11:02 -- common/autotest_common.sh@1543 -- # continue 00:05:12.564 21:11:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:12.564 21:11:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.564 21:11:02 -- common/autotest_common.sh@10 -- # set +x 00:05:12.564 21:11:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:12.564 21:11:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.564 21:11:02 -- common/autotest_common.sh@10 -- # set +x 00:05:12.564 21:11:02 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.712 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.712 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.712 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.712 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.712 21:11:03 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:13.712 21:11:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.712 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:05:13.712 21:11:03 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:13.712 21:11:03 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:13.712 21:11:03 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.712 21:11:03 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:13.712 21:11:03 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:13.712 21:11:03 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:13.712 21:11:03 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:13.712 21:11:03 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:13.712 21:11:03 
-- common/autotest_common.sh@1498 -- # bdfs=() 00:05:13.712 21:11:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:13.712 21:11:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.712 21:11:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:13.712 21:11:03 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:13.712 21:11:03 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:13.712 21:11:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:13.712 21:11:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:13.712 21:11:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:13.712 21:11:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:13.712 21:11:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.712 21:11:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:13.712 21:11:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:13.712 21:11:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:13.712 21:11:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.712 21:11:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:13.712 21:11:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:13.974 21:11:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:13.974 21:11:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.974 21:11:03 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:13.974 21:11:03 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:13.974 21:11:03 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:13.974 21:11:03 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.974 21:11:03 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:13.974 21:11:03 -- common/autotest_common.sh@1572 -- # return 0 00:05:13.974 21:11:03 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:13.974 21:11:03 -- common/autotest_common.sh@1580 -- # return 0 00:05:13.974 21:11:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:13.974 21:11:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:13.974 21:11:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:13.974 21:11:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:13.974 21:11:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:13.974 21:11:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.974 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:05:13.974 21:11:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:13.974 21:11:03 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.974 21:11:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.974 21:11:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.974 21:11:03 -- common/autotest_common.sh@10 -- # set +x 00:05:13.974 ************************************ 00:05:13.974 START TEST env 00:05:13.974 ************************************ 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.974 * Looking for test storage... 
00:05:13.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.974 21:11:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.974 21:11:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.974 21:11:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.974 21:11:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.974 21:11:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.974 21:11:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.974 21:11:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.974 21:11:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.974 21:11:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.974 21:11:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.974 21:11:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.974 21:11:03 env -- scripts/common.sh@344 -- # case "$op" in 00:05:13.974 21:11:03 env -- scripts/common.sh@345 -- # : 1 00:05:13.974 21:11:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.974 21:11:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.974 21:11:03 env -- scripts/common.sh@365 -- # decimal 1 00:05:13.974 21:11:03 env -- scripts/common.sh@353 -- # local d=1 00:05:13.974 21:11:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.974 21:11:03 env -- scripts/common.sh@355 -- # echo 1 00:05:13.974 21:11:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.974 21:11:03 env -- scripts/common.sh@366 -- # decimal 2 00:05:13.974 21:11:03 env -- scripts/common.sh@353 -- # local d=2 00:05:13.974 21:11:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.974 21:11:03 env -- scripts/common.sh@355 -- # echo 2 00:05:13.974 21:11:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.974 21:11:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.974 21:11:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.974 21:11:03 env -- scripts/common.sh@368 -- # return 0 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.974 --rc genhtml_branch_coverage=1 00:05:13.974 --rc genhtml_function_coverage=1 00:05:13.974 --rc genhtml_legend=1 00:05:13.974 --rc geninfo_all_blocks=1 00:05:13.974 --rc geninfo_unexecuted_blocks=1 00:05:13.974 00:05:13.974 ' 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.974 --rc genhtml_branch_coverage=1 00:05:13.974 --rc genhtml_function_coverage=1 00:05:13.974 --rc genhtml_legend=1 00:05:13.974 --rc geninfo_all_blocks=1 00:05:13.974 --rc geninfo_unexecuted_blocks=1 00:05:13.974 00:05:13.974 ' 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.974 --rc genhtml_branch_coverage=1 00:05:13.974 --rc genhtml_function_coverage=1 00:05:13.974 --rc 
genhtml_legend=1 00:05:13.974 --rc geninfo_all_blocks=1 00:05:13.974 --rc geninfo_unexecuted_blocks=1 00:05:13.974 00:05:13.974 ' 00:05:13.974 21:11:03 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.974 --rc genhtml_branch_coverage=1 00:05:13.974 --rc genhtml_function_coverage=1 00:05:13.974 --rc genhtml_legend=1 00:05:13.974 --rc geninfo_all_blocks=1 00:05:13.974 --rc geninfo_unexecuted_blocks=1 00:05:13.974 00:05:13.974 ' 00:05:13.974 21:11:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.975 21:11:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.975 21:11:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.975 21:11:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.975 ************************************ 00:05:13.975 START TEST env_memory 00:05:13.975 ************************************ 00:05:13.975 21:11:03 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.975 00:05:13.975 00:05:13.975 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.975 http://cunit.sourceforge.net/ 00:05:13.975 00:05:13.975 00:05:13.975 Suite: memory 00:05:13.975 Test: alloc and free memory map ...[2024-12-16 21:11:03.668869] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:14.236 passed 00:05:14.236 Test: mem map translation ...[2024-12-16 21:11:03.707955] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:14.236 [2024-12-16 21:11:03.708097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:14.236 [2024-12-16 21:11:03.708212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:14.236 [2024-12-16 21:11:03.708251] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:14.236 passed 00:05:14.236 Test: mem map registration ...[2024-12-16 21:11:03.776735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:14.236 [2024-12-16 21:11:03.776868] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:14.236 passed 00:05:14.236 Test: mem map adjacent registrations ...passed 00:05:14.236 00:05:14.236 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.236 suites 1 1 n/a 0 0 00:05:14.236 tests 4 4 4 0 0 00:05:14.236 asserts 152 152 152 0 n/a 00:05:14.236 00:05:14.236 Elapsed time = 0.233 seconds 00:05:14.236 00:05:14.236 real 0m0.271s 00:05:14.236 user 0m0.239s 00:05:14.236 sys 0m0.023s 00:05:14.236 21:11:03 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.236 ************************************ 00:05:14.236 21:11:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:14.236 END TEST env_memory 00:05:14.236 ************************************ 00:05:14.236 21:11:03 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.236 21:11:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.236 21:11:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.236 21:11:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.498 ************************************ 00:05:14.498 START TEST env_vtophys 00:05:14.498 ************************************ 00:05:14.498 21:11:03 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.498 EAL: lib.eal log level changed from notice to debug 00:05:14.498 EAL: Detected lcore 0 as core 0 on socket 0 00:05:14.498 EAL: Detected lcore 1 as core 0 on socket 0 00:05:14.498 EAL: Detected lcore 2 as core 0 on socket 0 00:05:14.498 EAL: Detected lcore 3 as core 0 on socket 0 00:05:14.498 EAL: Detected lcore 4 as core 0 on socket 0 00:05:14.498 EAL: Detected lcore 5 as core 0 on socket 0 00:05:14.498 EAL: Detected lcore 6 as core 0 on socket 0 00:05:14.498 EAL: Detected lcore 7 as core 0 on socket 0 00:05:14.498 EAL: Detected lcore 8 as core 0 on socket 0 00:05:14.498 EAL: Detected lcore 9 as core 0 on socket 0 00:05:14.498 EAL: Maximum logical cores by configuration: 128 00:05:14.498 EAL: Detected CPU lcores: 10 00:05:14.498 EAL: Detected NUMA nodes: 1 00:05:14.498 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:14.498 EAL: Detected shared linkage of DPDK 00:05:14.498 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:14.498 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:14.498 EAL: Registered [vdev] bus. 00:05:14.498 EAL: bus.vdev log level changed from disabled to notice 00:05:14.498 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:14.498 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:14.498 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:14.498 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:14.498 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:14.498 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:14.498 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:14.498 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:14.498 EAL: No shared files mode enabled, IPC will be disabled 00:05:14.498 EAL: No shared files mode enabled, IPC is disabled 00:05:14.498 EAL: Selected IOVA mode 'PA' 00:05:14.498 EAL: Probing VFIO support... 00:05:14.498 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.498 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:14.498 EAL: Ask a virtual area of 0x2e000 bytes 00:05:14.498 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:14.498 EAL: Setting up physically contiguous memory... 
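The EAL detection and probe sequence above (lcore detection, shared-linkage driver loading, IOVA mode selection, VFIO probing) is driven by SPDK's env layer at startup. A minimal C sketch of that startup path, assuming the public spdk/env.h API; the process name and core mask here are illustrative, not taken from this run:

    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        opts.opts_size = sizeof(opts);   /* required by recent SPDK ABI; drop on older releases */
        spdk_env_opts_init(&opts);       /* fill in defaults */
        opts.name = "env_sketch";        /* illustrative process name */
        opts.core_mask = "0x1";          /* one lcore, like the -c 0x1 used by these tests */

        if (spdk_env_init(&opts) < 0) {  /* drives the EAL probe/detect logged above */
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }
        spdk_env_fini();
        return 0;
    }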
00:05:14.498 EAL: Setting maximum number of open files to 524288 00:05:14.498 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:14.498 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:14.498 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.498 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:14.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.498 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.498 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:14.498 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:14.498 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.498 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:14.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.498 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.498 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:14.498 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:14.498 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.498 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:14.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.498 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.498 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:14.498 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:14.498 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.498 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:14.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.498 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.498 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:14.498 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:14.498 EAL: Hugepages will be freed exactly as allocated. 00:05:14.498 EAL: No shared files mode enabled, IPC is disabled 00:05:14.498 EAL: No shared files mode enabled, IPC is disabled 00:05:14.498 EAL: TSC frequency is ~2600000 KHz 00:05:14.498 EAL: Main lcore 0 is ready (tid=7f4d337eca40;cpuset=[0]) 00:05:14.498 EAL: Trying to obtain current memory policy. 00:05:14.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.498 EAL: Restoring previous memory policy: 0 00:05:14.498 EAL: request: mp_malloc_sync 00:05:14.498 EAL: No shared files mode enabled, IPC is disabled 00:05:14.498 EAL: Heap on socket 0 was expanded by 2MB 00:05:14.498 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.498 EAL: No shared files mode enabled, IPC is disabled 00:05:14.498 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:14.498 EAL: Mem event callback 'spdk:(nil)' registered 00:05:14.499 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:14.499 00:05:14.499 00:05:14.499 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.499 http://cunit.sourceforge.net/ 00:05:14.499 00:05:14.499 00:05:14.499 Suite: components_suite 00:05:15.071 Test: vtophys_malloc_test ...passed 00:05:15.071 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
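vtophys_malloc_test above passes by checking virtual-to-physical translation against the memseg lists just reserved. A short sketch of the same lookup, assuming the standard spdk_dma_zmalloc()/spdk_vtophys() API from spdk/env.h; the buffer size and alignment are illustrative:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    static void check_translation(void) /* call after spdk_env_init() */
    {
        /* DMA-safe, page-aligned buffer from the hugepage-backed heap */
        void *buf = spdk_dma_zmalloc(4096, 0x1000, NULL);
        uint64_t paddr = spdk_vtophys(buf, NULL);

        if (paddr == SPDK_VTOPHYS_ERROR)
            fprintf(stderr, "no translation for %p\n", buf);
        else
            printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
        spdk_dma_free(buf);
    }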
00:05:15.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.071 EAL: Restoring previous memory policy: 4 00:05:15.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.071 EAL: request: mp_malloc_sync 00:05:15.071 EAL: No shared files mode enabled, IPC is disabled 00:05:15.071 EAL: Heap on socket 0 was expanded by 4MB 00:05:15.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.071 EAL: request: mp_malloc_sync 00:05:15.071 EAL: No shared files mode enabled, IPC is disabled 00:05:15.071 EAL: Heap on socket 0 was shrunk by 4MB 00:05:15.071 EAL: Trying to obtain current memory policy. 00:05:15.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.071 EAL: Restoring previous memory policy: 4 00:05:15.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.071 EAL: request: mp_malloc_sync 00:05:15.071 EAL: No shared files mode enabled, IPC is disabled 00:05:15.071 EAL: Heap on socket 0 was expanded by 6MB 00:05:15.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.071 EAL: request: mp_malloc_sync 00:05:15.071 EAL: No shared files mode enabled, IPC is disabled 00:05:15.071 EAL: Heap on socket 0 was shrunk by 6MB 00:05:15.071 EAL: Trying to obtain current memory policy. 00:05:15.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.071 EAL: Restoring previous memory policy: 4 00:05:15.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.071 EAL: request: mp_malloc_sync 00:05:15.071 EAL: No shared files mode enabled, IPC is disabled 00:05:15.071 EAL: Heap on socket 0 was expanded by 10MB 00:05:15.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.071 EAL: request: mp_malloc_sync 00:05:15.071 EAL: No shared files mode enabled, IPC is disabled 00:05:15.071 EAL: Heap on socket 0 was shrunk by 10MB 00:05:15.071 EAL: Trying to obtain current memory policy. 00:05:15.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.071 EAL: Restoring previous memory policy: 4 00:05:15.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.071 EAL: request: mp_malloc_sync 00:05:15.071 EAL: No shared files mode enabled, IPC is disabled 00:05:15.071 EAL: Heap on socket 0 was expanded by 18MB 00:05:15.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.071 EAL: request: mp_malloc_sync 00:05:15.071 EAL: No shared files mode enabled, IPC is disabled 00:05:15.071 EAL: Heap on socket 0 was shrunk by 18MB 00:05:15.071 EAL: Trying to obtain current memory policy. 00:05:15.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.071 EAL: Restoring previous memory policy: 4 00:05:15.072 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.072 EAL: request: mp_malloc_sync 00:05:15.072 EAL: No shared files mode enabled, IPC is disabled 00:05:15.072 EAL: Heap on socket 0 was expanded by 34MB 00:05:15.072 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.072 EAL: request: mp_malloc_sync 00:05:15.072 EAL: No shared files mode enabled, IPC is disabled 00:05:15.072 EAL: Heap on socket 0 was shrunk by 34MB 00:05:15.072 EAL: Trying to obtain current memory policy. 
00:05:15.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.072 EAL: Restoring previous memory policy: 4 00:05:15.072 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.072 EAL: request: mp_malloc_sync 00:05:15.072 EAL: No shared files mode enabled, IPC is disabled 00:05:15.072 EAL: Heap on socket 0 was expanded by 66MB 00:05:15.072 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.072 EAL: request: mp_malloc_sync 00:05:15.072 EAL: No shared files mode enabled, IPC is disabled 00:05:15.072 EAL: Heap on socket 0 was shrunk by 66MB 00:05:15.072 EAL: Trying to obtain current memory policy. 00:05:15.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.072 EAL: Restoring previous memory policy: 4 00:05:15.072 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.072 EAL: request: mp_malloc_sync 00:05:15.072 EAL: No shared files mode enabled, IPC is disabled 00:05:15.072 EAL: Heap on socket 0 was expanded by 130MB 00:05:15.072 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.072 EAL: request: mp_malloc_sync 00:05:15.072 EAL: No shared files mode enabled, IPC is disabled 00:05:15.072 EAL: Heap on socket 0 was shrunk by 130MB 00:05:15.072 EAL: Trying to obtain current memory policy. 00:05:15.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.072 EAL: Restoring previous memory policy: 4 00:05:15.072 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.072 EAL: request: mp_malloc_sync 00:05:15.072 EAL: No shared files mode enabled, IPC is disabled 00:05:15.072 EAL: Heap on socket 0 was expanded by 258MB 00:05:15.072 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.072 EAL: request: mp_malloc_sync 00:05:15.072 EAL: No shared files mode enabled, IPC is disabled 00:05:15.072 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.072 EAL: Trying to obtain current memory policy. 00:05:15.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.333 EAL: Restoring previous memory policy: 4 00:05:15.333 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.333 EAL: request: mp_malloc_sync 00:05:15.333 EAL: No shared files mode enabled, IPC is disabled 00:05:15.333 EAL: Heap on socket 0 was expanded by 514MB 00:05:15.333 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.333 EAL: request: mp_malloc_sync 00:05:15.333 EAL: No shared files mode enabled, IPC is disabled 00:05:15.333 EAL: Heap on socket 0 was shrunk by 514MB 00:05:15.333 EAL: Trying to obtain current memory policy. 
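The expand/shrink pairs above (and the 514MB/1026MB steps that continue below) show the heap growing on each allocation and returning hugepages on free, with SPDK notified through the mem event callback registered earlier ("Mem event callback 'spdk:(nil)' registered"). A hedged sketch of that pattern using DPDK's rte_malloc/rte_mem_event API; the callback name and 66MB size are illustrative:

    #include <stdio.h>
    #include <rte_malloc.h>
    #include <rte_memory.h>

    static void on_mem_event(enum rte_mem_event type, const void *addr,
                             size_t len, void *arg)
    {
        (void)arg;
        printf("heap %s: %zu bytes at %p\n",
               type == RTE_MEM_EVENT_ALLOC ? "expanded" : "shrunk", len, addr);
    }

    static void demo(void) /* call after rte_eal_init()/spdk_env_init() */
    {
        rte_mem_event_callback_register("sketch", on_mem_event, NULL);
        void *p = rte_malloc(NULL, 66 << 20, 0); /* grows the heap, as logged above */
        rte_free(p);                             /* hugepages freed exactly as allocated */
    }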
00:05:15.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.597 EAL: Restoring previous memory policy: 4 00:05:15.597 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.597 EAL: request: mp_malloc_sync 00:05:15.597 EAL: No shared files mode enabled, IPC is disabled 00:05:15.597 EAL: Heap on socket 0 was expanded by 1026MB 00:05:15.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.861 passed 00:05:15.861 00:05:15.861 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.861 suites 1 1 n/a 0 0 00:05:15.861 tests 2 2 2 0 0 00:05:15.861 asserts 5379 5379 5379 0 n/a 00:05:15.861 00:05:15.861 Elapsed time = 1.343 seconds 00:05:15.861 EAL: request: mp_malloc_sync 00:05:15.861 EAL: No shared files mode enabled, IPC is disabled 00:05:15.861 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:15.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.861 EAL: request: mp_malloc_sync 00:05:15.861 EAL: No shared files mode enabled, IPC is disabled 00:05:15.861 EAL: Heap on socket 0 was shrunk by 2MB 00:05:15.861 EAL: No shared files mode enabled, IPC is disabled 00:05:15.861 EAL: No shared files mode enabled, IPC is disabled 00:05:15.861 EAL: No shared files mode enabled, IPC is disabled 00:05:15.861 00:05:15.861 real 0m1.573s 00:05:15.861 user 0m0.628s 00:05:15.861 sys 0m0.803s 00:05:15.861 21:11:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.861 21:11:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:15.861 ************************************ 00:05:15.861 END TEST env_vtophys 00:05:15.861 ************************************ 00:05:16.123 21:11:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:16.123 21:11:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.123 21:11:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.123 21:11:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.123 ************************************ 00:05:16.123 START TEST env_pci 00:05:16.123 ************************************ 00:05:16.123 21:11:05 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:16.123 00:05:16.123 00:05:16.123 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.123 http://cunit.sourceforge.net/ 00:05:16.123 00:05:16.123 00:05:16.123 Suite: pci 00:05:16.123 Test: pci_hook ...[2024-12-16 21:11:05.608825] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70793 has claimed it 00:05:16.123 passed 00:05:16.123 00:05:16.123 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.123 suites 1 1 n/a 0 0 00:05:16.123 tests 1 1 1 0 0 00:05:16.123 asserts 25 25 25 0 n/a 00:05:16.123 00:05:16.123 Elapsed time = 0.004 seconds 00:05:16.123 EAL: Cannot find device (10000:00:01.0) 00:05:16.123 EAL: Failed to attach device on primary process 00:05:16.123 ************************************ 00:05:16.123 END TEST env_pci 00:05:16.123 ************************************ 00:05:16.123 00:05:16.123 real 0m0.044s 00:05:16.123 user 0m0.023s 00:05:16.123 sys 0m0.021s 00:05:16.123 21:11:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.123 21:11:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:16.123 21:11:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:16.123 21:11:05 env -- env/env.sh@15 -- # uname 00:05:16.123 21:11:05 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:16.123 21:11:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:16.123 21:11:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.123 21:11:05 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:16.123 21:11:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.123 21:11:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.123 ************************************ 00:05:16.123 START TEST env_dpdk_post_init 00:05:16.123 ************************************ 00:05:16.123 21:11:05 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.123 EAL: Detected CPU lcores: 10 00:05:16.123 EAL: Detected NUMA nodes: 1 00:05:16.123 EAL: Detected shared linkage of DPDK 00:05:16.123 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.123 EAL: Selected IOVA mode 'PA' 00:05:16.384 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:16.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:16.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:16.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:16.384 Starting DPDK initialization... 00:05:16.384 Starting SPDK post initialization... 00:05:16.384 SPDK NVMe probe 00:05:16.384 Attaching to 0000:00:10.0 00:05:16.384 Attaching to 0000:00:11.0 00:05:16.384 Attaching to 0000:00:12.0 00:05:16.384 Attaching to 0000:00:13.0 00:05:16.384 Attached to 0000:00:13.0 00:05:16.384 Attached to 0000:00:10.0 00:05:16.384 Attached to 0000:00:11.0 00:05:16.384 Attached to 0000:00:12.0 00:05:16.384 Cleaning up... 
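The "Attaching to"/"Attached to" lines above come from NVMe controller enumeration after DPDK is initialized. A sketch of that probe/attach flow, assuming the public spdk/nvme.h callback signatures; the printed messages and process name are illustrative:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        (void)ctx; (void)opts;
        printf("Attaching to %s\n", trid->traddr);
        return true; /* claim every controller found */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        (void)ctx; (void)ctrlr; (void)opts;
        printf("Attached to %s\n", trid->traddr);
    }

    int main(void)
    {
        struct spdk_env_opts opts;
        opts.opts_size = sizeof(opts);
        spdk_env_opts_init(&opts);
        opts.name = "probe_sketch"; /* illustrative */
        if (spdk_env_init(&opts) < 0)
            return 1;
        /* NULL trid => enumerate the local PCIe bus, as this test does */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }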
00:05:16.384 00:05:16.384 real 0m0.233s 00:05:16.384 user 0m0.074s 00:05:16.384 sys 0m0.061s 00:05:16.384 21:11:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.384 21:11:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.384 ************************************ 00:05:16.384 END TEST env_dpdk_post_init 00:05:16.384 ************************************ 00:05:16.384 21:11:05 env -- env/env.sh@26 -- # uname 00:05:16.384 21:11:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:16.384 21:11:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.384 21:11:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.384 21:11:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.384 21:11:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.384 ************************************ 00:05:16.384 START TEST env_mem_callbacks 00:05:16.384 ************************************ 00:05:16.384 21:11:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.384 EAL: Detected CPU lcores: 10 00:05:16.384 EAL: Detected NUMA nodes: 1 00:05:16.384 EAL: Detected shared linkage of DPDK 00:05:16.384 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.384 EAL: Selected IOVA mode 'PA' 00:05:16.645 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.645 00:05:16.645 00:05:16.645 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.645 http://cunit.sourceforge.net/ 00:05:16.645 00:05:16.645 00:05:16.645 Suite: memory 00:05:16.645 Test: test ... 00:05:16.645 register 0x200000200000 2097152 00:05:16.645 malloc 3145728 00:05:16.645 register 0x200000400000 4194304 00:05:16.645 buf 0x200000500000 len 3145728 PASSED 00:05:16.645 malloc 64 00:05:16.645 buf 0x2000004fff40 len 64 PASSED 00:05:16.645 malloc 4194304 00:05:16.645 register 0x200000800000 6291456 00:05:16.645 buf 0x200000a00000 len 4194304 PASSED 00:05:16.645 free 0x200000500000 3145728 00:05:16.645 free 0x2000004fff40 64 00:05:16.645 unregister 0x200000400000 4194304 PASSED 00:05:16.645 free 0x200000a00000 4194304 00:05:16.645 unregister 0x200000800000 6291456 PASSED 00:05:16.645 malloc 8388608 00:05:16.645 register 0x200000400000 10485760 00:05:16.645 buf 0x200000600000 len 8388608 PASSED 00:05:16.645 free 0x200000600000 8388608 00:05:16.645 unregister 0x200000400000 10485760 PASSED 00:05:16.645 passed 00:05:16.645 00:05:16.645 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.645 suites 1 1 n/a 0 0 00:05:16.645 tests 1 1 1 0 0 00:05:16.645 asserts 15 15 15 0 n/a 00:05:16.645 00:05:16.645 Elapsed time = 0.012 seconds 00:05:16.645 00:05:16.645 real 0m0.162s 00:05:16.645 user 0m0.021s 00:05:16.645 sys 0m0.038s 00:05:16.645 21:11:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.645 ************************************ 00:05:16.645 END TEST env_mem_callbacks 00:05:16.645 ************************************ 00:05:16.645 21:11:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:16.645 00:05:16.645 real 0m2.769s 00:05:16.645 user 0m1.147s 00:05:16.645 sys 0m1.150s 00:05:16.645 ************************************ 00:05:16.645 END TEST env 00:05:16.645 ************************************ 00:05:16.645 21:11:06 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.645 21:11:06 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:16.645 21:11:06 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:16.645 21:11:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.645 21:11:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.645 21:11:06 -- common/autotest_common.sh@10 -- # set +x 00:05:16.645 ************************************ 00:05:16.645 START TEST rpc 00:05:16.645 ************************************ 00:05:16.645 21:11:06 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:16.645 * Looking for test storage... 00:05:16.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:16.906 21:11:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.906 21:11:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.906 21:11:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.906 21:11:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.906 21:11:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.906 21:11:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.906 21:11:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.906 21:11:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.906 21:11:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.906 21:11:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.906 21:11:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.906 21:11:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:16.906 21:11:06 rpc -- scripts/common.sh@345 -- # : 1 00:05:16.906 21:11:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.906 21:11:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.906 21:11:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:16.906 21:11:06 rpc -- scripts/common.sh@353 -- # local d=1 00:05:16.906 21:11:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.906 21:11:06 rpc -- scripts/common.sh@355 -- # echo 1 00:05:16.906 21:11:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.906 21:11:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:16.906 21:11:06 rpc -- scripts/common.sh@353 -- # local d=2 00:05:16.906 21:11:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.906 21:11:06 rpc -- scripts/common.sh@355 -- # echo 2 00:05:16.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
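rpc_cmd in the trace below talks JSON-RPC to the spdk_tgt process over the UNIX socket it waits on above. A minimal sketch of one such round-trip with plain POSIX sockets; the request id and buffer size are illustrative, and bdev_get_bdevs is the method these rpc tests exercise:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char buf[8192];
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
        if (write(fd, req, strlen(req)) < 0)
            perror("write");

        ssize_t n = read(fd, buf, sizeof(buf) - 1); /* JSON array of bdevs */
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
        close(fd);
        return 0;
    }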
00:05:16.906 21:11:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.906 21:11:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.906 21:11:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.906 21:11:06 rpc -- scripts/common.sh@368 -- # return 0 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:16.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.906 --rc genhtml_branch_coverage=1 00:05:16.906 --rc genhtml_function_coverage=1 00:05:16.906 --rc genhtml_legend=1 00:05:16.906 --rc geninfo_all_blocks=1 00:05:16.906 --rc geninfo_unexecuted_blocks=1 00:05:16.906 00:05:16.906 ' 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:16.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.906 --rc genhtml_branch_coverage=1 00:05:16.906 --rc genhtml_function_coverage=1 00:05:16.906 --rc genhtml_legend=1 00:05:16.906 --rc geninfo_all_blocks=1 00:05:16.906 --rc geninfo_unexecuted_blocks=1 00:05:16.906 00:05:16.906 ' 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:16.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.906 --rc genhtml_branch_coverage=1 00:05:16.906 --rc genhtml_function_coverage=1 00:05:16.906 --rc genhtml_legend=1 00:05:16.906 --rc geninfo_all_blocks=1 00:05:16.906 --rc geninfo_unexecuted_blocks=1 00:05:16.906 00:05:16.906 ' 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:16.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.906 --rc genhtml_branch_coverage=1 00:05:16.906 --rc genhtml_function_coverage=1 00:05:16.906 --rc genhtml_legend=1 00:05:16.906 --rc geninfo_all_blocks=1 00:05:16.906 --rc geninfo_unexecuted_blocks=1 00:05:16.906 00:05:16.906 ' 00:05:16.906 21:11:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70919 00:05:16.906 21:11:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.906 21:11:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70919 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 70919 ']' 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.906 21:11:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:16.906 21:11:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.906 [2024-12-16 21:11:06.513122] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:16.906 [2024-12-16 21:11:06.513273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70919 ] 00:05:17.167 [2024-12-16 21:11:06.659479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.167 [2024-12-16 21:11:06.689250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:05:17.167 [2024-12-16 21:11:06.689316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70919' to capture a snapshot of events at runtime. 00:05:17.167 [2024-12-16 21:11:06.689331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:17.167 [2024-12-16 21:11:06.689340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:17.167 [2024-12-16 21:11:06.689352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70919 for offline analysis/debug. 00:05:17.167 [2024-12-16 21:11:06.689794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.737 21:11:07 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.737 21:11:07 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.737 21:11:07 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.737 21:11:07 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.737 21:11:07 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:17.737 21:11:07 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:17.737 21:11:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.737 21:11:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.737 21:11:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.737 ************************************ 00:05:17.737 START TEST rpc_integrity 00:05:17.737 ************************************ 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:17.737 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.737 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.737 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.737 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.737 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.737 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:17.737 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.737 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.998 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.998 { 00:05:17.998 "name": "Malloc0", 00:05:17.998 "aliases": [ 00:05:17.998 "adc710fb-c969-421e-bf4b-97ff2f62a993" 00:05:17.998 ], 
00:05:17.998 "product_name": "Malloc disk", 00:05:17.998 "block_size": 512, 00:05:17.998 "num_blocks": 16384, 00:05:17.998 "uuid": "adc710fb-c969-421e-bf4b-97ff2f62a993", 00:05:17.998 "assigned_rate_limits": { 00:05:17.998 "rw_ios_per_sec": 0, 00:05:17.998 "rw_mbytes_per_sec": 0, 00:05:17.998 "r_mbytes_per_sec": 0, 00:05:17.998 "w_mbytes_per_sec": 0 00:05:17.998 }, 00:05:17.998 "claimed": false, 00:05:17.998 "zoned": false, 00:05:17.998 "supported_io_types": { 00:05:17.998 "read": true, 00:05:17.998 "write": true, 00:05:17.998 "unmap": true, 00:05:17.998 "flush": true, 00:05:17.998 "reset": true, 00:05:17.998 "nvme_admin": false, 00:05:17.998 "nvme_io": false, 00:05:17.998 "nvme_io_md": false, 00:05:17.998 "write_zeroes": true, 00:05:17.998 "zcopy": true, 00:05:17.998 "get_zone_info": false, 00:05:17.998 "zone_management": false, 00:05:17.998 "zone_append": false, 00:05:17.998 "compare": false, 00:05:17.998 "compare_and_write": false, 00:05:17.998 "abort": true, 00:05:17.998 "seek_hole": false, 00:05:17.998 "seek_data": false, 00:05:17.998 "copy": true, 00:05:17.998 "nvme_iov_md": false 00:05:17.998 }, 00:05:17.998 "memory_domains": [ 00:05:17.998 { 00:05:17.998 "dma_device_id": "system", 00:05:17.998 "dma_device_type": 1 00:05:17.998 }, 00:05:17.998 { 00:05:17.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.998 "dma_device_type": 2 00:05:17.998 } 00:05:17.998 ], 00:05:17.998 "driver_specific": {} 00:05:17.998 } 00:05:17.998 ]' 00:05:17.998 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.998 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.998 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:17.998 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.998 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.998 [2024-12-16 21:11:07.474997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:17.998 [2024-12-16 21:11:07.475057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.998 [2024-12-16 21:11:07.475082] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:17.998 [2024-12-16 21:11:07.475091] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.998 [2024-12-16 21:11:07.477311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.998 [2024-12-16 21:11:07.477352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.998 Passthru0 00:05:17.998 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.998 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.998 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.998 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.998 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.998 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.998 { 00:05:17.998 "name": "Malloc0", 00:05:17.998 "aliases": [ 00:05:17.998 "adc710fb-c969-421e-bf4b-97ff2f62a993" 00:05:17.998 ], 00:05:17.998 "product_name": "Malloc disk", 00:05:17.998 "block_size": 512, 00:05:17.998 "num_blocks": 16384, 00:05:17.998 "uuid": "adc710fb-c969-421e-bf4b-97ff2f62a993", 00:05:17.998 "assigned_rate_limits": { 00:05:17.998 "rw_ios_per_sec": 0, 
00:05:17.998 "rw_mbytes_per_sec": 0, 00:05:17.998 "r_mbytes_per_sec": 0, 00:05:17.998 "w_mbytes_per_sec": 0 00:05:17.998 }, 00:05:17.998 "claimed": true, 00:05:17.998 "claim_type": "exclusive_write", 00:05:17.998 "zoned": false, 00:05:17.998 "supported_io_types": { 00:05:17.998 "read": true, 00:05:17.998 "write": true, 00:05:17.998 "unmap": true, 00:05:17.998 "flush": true, 00:05:17.998 "reset": true, 00:05:17.998 "nvme_admin": false, 00:05:17.998 "nvme_io": false, 00:05:17.998 "nvme_io_md": false, 00:05:17.998 "write_zeroes": true, 00:05:17.998 "zcopy": true, 00:05:17.998 "get_zone_info": false, 00:05:17.998 "zone_management": false, 00:05:17.998 "zone_append": false, 00:05:17.998 "compare": false, 00:05:17.998 "compare_and_write": false, 00:05:17.998 "abort": true, 00:05:17.998 "seek_hole": false, 00:05:17.998 "seek_data": false, 00:05:17.998 "copy": true, 00:05:17.998 "nvme_iov_md": false 00:05:17.998 }, 00:05:17.998 "memory_domains": [ 00:05:17.998 { 00:05:17.998 "dma_device_id": "system", 00:05:17.998 "dma_device_type": 1 00:05:17.998 }, 00:05:17.998 { 00:05:17.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.998 "dma_device_type": 2 00:05:17.998 } 00:05:17.998 ], 00:05:17.998 "driver_specific": {} 00:05:17.998 }, 00:05:17.998 { 00:05:17.998 "name": "Passthru0", 00:05:17.998 "aliases": [ 00:05:17.999 "249c379d-e75e-5f2f-9dd1-f96d45f00da3" 00:05:17.999 ], 00:05:17.999 "product_name": "passthru", 00:05:17.999 "block_size": 512, 00:05:17.999 "num_blocks": 16384, 00:05:17.999 "uuid": "249c379d-e75e-5f2f-9dd1-f96d45f00da3", 00:05:17.999 "assigned_rate_limits": { 00:05:17.999 "rw_ios_per_sec": 0, 00:05:17.999 "rw_mbytes_per_sec": 0, 00:05:17.999 "r_mbytes_per_sec": 0, 00:05:17.999 "w_mbytes_per_sec": 0 00:05:17.999 }, 00:05:17.999 "claimed": false, 00:05:17.999 "zoned": false, 00:05:17.999 "supported_io_types": { 00:05:17.999 "read": true, 00:05:17.999 "write": true, 00:05:17.999 "unmap": true, 00:05:17.999 "flush": true, 00:05:17.999 "reset": true, 00:05:17.999 "nvme_admin": false, 00:05:17.999 "nvme_io": false, 00:05:17.999 "nvme_io_md": false, 00:05:17.999 "write_zeroes": true, 00:05:17.999 "zcopy": true, 00:05:17.999 "get_zone_info": false, 00:05:17.999 "zone_management": false, 00:05:17.999 "zone_append": false, 00:05:17.999 "compare": false, 00:05:17.999 "compare_and_write": false, 00:05:17.999 "abort": true, 00:05:17.999 "seek_hole": false, 00:05:17.999 "seek_data": false, 00:05:17.999 "copy": true, 00:05:17.999 "nvme_iov_md": false 00:05:17.999 }, 00:05:17.999 "memory_domains": [ 00:05:17.999 { 00:05:17.999 "dma_device_id": "system", 00:05:17.999 "dma_device_type": 1 00:05:17.999 }, 00:05:17.999 { 00:05:17.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.999 "dma_device_type": 2 00:05:17.999 } 00:05:17.999 ], 00:05:17.999 "driver_specific": { 00:05:17.999 "passthru": { 00:05:17.999 "name": "Passthru0", 00:05:17.999 "base_bdev_name": "Malloc0" 00:05:17.999 } 00:05:17.999 } 00:05:17.999 } 00:05:17.999 ]' 00:05:17.999 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.999 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.999 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.999 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.999 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.999 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.999 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.999 ************************************ 00:05:17.999 END TEST rpc_integrity 00:05:17.999 ************************************ 00:05:17.999 21:11:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.999 00:05:17.999 real 0m0.222s 00:05:17.999 user 0m0.130s 00:05:17.999 sys 0m0.027s 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.999 21:11:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 21:11:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:17.999 21:11:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.999 21:11:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.999 21:11:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 ************************************ 00:05:17.999 START TEST rpc_plugins 00:05:17.999 ************************************ 00:05:17.999 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:17.999 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:17.999 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.999 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.999 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:17.999 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:17.999 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.999 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.999 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.999 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:17.999 { 00:05:17.999 "name": "Malloc1", 00:05:17.999 "aliases": [ 00:05:17.999 "51e1815c-1e41-438e-9291-02dacb53771b" 00:05:17.999 ], 00:05:17.999 "product_name": "Malloc disk", 00:05:17.999 "block_size": 4096, 00:05:17.999 "num_blocks": 256, 00:05:17.999 "uuid": "51e1815c-1e41-438e-9291-02dacb53771b", 00:05:17.999 "assigned_rate_limits": { 00:05:17.999 "rw_ios_per_sec": 0, 00:05:17.999 "rw_mbytes_per_sec": 0, 00:05:17.999 "r_mbytes_per_sec": 0, 00:05:17.999 "w_mbytes_per_sec": 0 00:05:17.999 }, 00:05:17.999 "claimed": false, 00:05:17.999 "zoned": false, 00:05:17.999 "supported_io_types": { 00:05:17.999 "read": true, 00:05:17.999 "write": true, 00:05:17.999 "unmap": true, 00:05:17.999 "flush": true, 00:05:17.999 "reset": true, 00:05:17.999 "nvme_admin": false, 00:05:17.999 "nvme_io": false, 00:05:17.999 "nvme_io_md": false, 00:05:17.999 "write_zeroes": true, 
00:05:17.999 "zcopy": true, 00:05:17.999 "get_zone_info": false, 00:05:17.999 "zone_management": false, 00:05:17.999 "zone_append": false, 00:05:17.999 "compare": false, 00:05:17.999 "compare_and_write": false, 00:05:17.999 "abort": true, 00:05:17.999 "seek_hole": false, 00:05:17.999 "seek_data": false, 00:05:17.999 "copy": true, 00:05:17.999 "nvme_iov_md": false 00:05:17.999 }, 00:05:17.999 "memory_domains": [ 00:05:17.999 { 00:05:17.999 "dma_device_id": "system", 00:05:17.999 "dma_device_type": 1 00:05:17.999 }, 00:05:17.999 { 00:05:17.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.999 "dma_device_type": 2 00:05:17.999 } 00:05:17.999 ], 00:05:17.999 "driver_specific": {} 00:05:17.999 } 00:05:17.999 ]' 00:05:17.999 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:18.260 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:18.260 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:18.260 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.260 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.260 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.260 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:18.260 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.260 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.260 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.260 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:18.260 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:18.260 ************************************ 00:05:18.260 END TEST rpc_plugins 00:05:18.260 ************************************ 00:05:18.260 21:11:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:18.260 00:05:18.260 real 0m0.108s 00:05:18.260 user 0m0.061s 00:05:18.260 sys 0m0.010s 00:05:18.260 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.260 21:11:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.260 21:11:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:18.260 21:11:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.260 21:11:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.260 21:11:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.260 ************************************ 00:05:18.260 START TEST rpc_trace_cmd_test 00:05:18.260 ************************************ 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:18.260 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70919", 00:05:18.260 "tpoint_group_mask": "0x8", 00:05:18.260 "iscsi_conn": { 00:05:18.260 "mask": "0x2", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "scsi": { 00:05:18.260 
"mask": "0x4", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "bdev": { 00:05:18.260 "mask": "0x8", 00:05:18.260 "tpoint_mask": "0xffffffffffffffff" 00:05:18.260 }, 00:05:18.260 "nvmf_rdma": { 00:05:18.260 "mask": "0x10", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "nvmf_tcp": { 00:05:18.260 "mask": "0x20", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "ftl": { 00:05:18.260 "mask": "0x40", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "blobfs": { 00:05:18.260 "mask": "0x80", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "dsa": { 00:05:18.260 "mask": "0x200", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "thread": { 00:05:18.260 "mask": "0x400", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "nvme_pcie": { 00:05:18.260 "mask": "0x800", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "iaa": { 00:05:18.260 "mask": "0x1000", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "nvme_tcp": { 00:05:18.260 "mask": "0x2000", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "bdev_nvme": { 00:05:18.260 "mask": "0x4000", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "sock": { 00:05:18.260 "mask": "0x8000", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "blob": { 00:05:18.260 "mask": "0x10000", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "bdev_raid": { 00:05:18.260 "mask": "0x20000", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 }, 00:05:18.260 "scheduler": { 00:05:18.260 "mask": "0x40000", 00:05:18.260 "tpoint_mask": "0x0" 00:05:18.260 } 00:05:18.260 }' 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:18.260 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:18.522 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:18.522 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:18.522 ************************************ 00:05:18.522 END TEST rpc_trace_cmd_test 00:05:18.522 ************************************ 00:05:18.522 21:11:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:18.522 00:05:18.522 real 0m0.174s 00:05:18.522 user 0m0.143s 00:05:18.522 sys 0m0.020s 00:05:18.522 21:11:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.522 21:11:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.522 21:11:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:18.522 21:11:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:18.522 21:11:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:18.522 21:11:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.522 21:11:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.522 21:11:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.522 ************************************ 00:05:18.522 START TEST rpc_daemon_integrity 00:05:18.522 
************************************ 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.522 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.523 { 00:05:18.523 "name": "Malloc2", 00:05:18.523 "aliases": [ 00:05:18.523 "8ba5f9d1-3ac6-4fa9-8525-61a4db44af42" 00:05:18.523 ], 00:05:18.523 "product_name": "Malloc disk", 00:05:18.523 "block_size": 512, 00:05:18.523 "num_blocks": 16384, 00:05:18.523 "uuid": "8ba5f9d1-3ac6-4fa9-8525-61a4db44af42", 00:05:18.523 "assigned_rate_limits": { 00:05:18.523 "rw_ios_per_sec": 0, 00:05:18.523 "rw_mbytes_per_sec": 0, 00:05:18.523 "r_mbytes_per_sec": 0, 00:05:18.523 "w_mbytes_per_sec": 0 00:05:18.523 }, 00:05:18.523 "claimed": false, 00:05:18.523 "zoned": false, 00:05:18.523 "supported_io_types": { 00:05:18.523 "read": true, 00:05:18.523 "write": true, 00:05:18.523 "unmap": true, 00:05:18.523 "flush": true, 00:05:18.523 "reset": true, 00:05:18.523 "nvme_admin": false, 00:05:18.523 "nvme_io": false, 00:05:18.523 "nvme_io_md": false, 00:05:18.523 "write_zeroes": true, 00:05:18.523 "zcopy": true, 00:05:18.523 "get_zone_info": false, 00:05:18.523 "zone_management": false, 00:05:18.523 "zone_append": false, 00:05:18.523 "compare": false, 00:05:18.523 "compare_and_write": false, 00:05:18.523 "abort": true, 00:05:18.523 "seek_hole": false, 00:05:18.523 "seek_data": false, 00:05:18.523 "copy": true, 00:05:18.523 "nvme_iov_md": false 00:05:18.523 }, 00:05:18.523 "memory_domains": [ 00:05:18.523 { 00:05:18.523 "dma_device_id": "system", 00:05:18.523 "dma_device_type": 1 00:05:18.523 }, 00:05:18.523 { 00:05:18.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.523 "dma_device_type": 2 00:05:18.523 } 00:05:18.523 ], 00:05:18.523 "driver_specific": {} 00:05:18.523 } 00:05:18.523 ]' 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.523 [2024-12-16 21:11:08.155256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:18.523 [2024-12-16 21:11:08.155303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.523 [2024-12-16 21:11:08.155325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:18.523 [2024-12-16 21:11:08.155333] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.523 [2024-12-16 21:11:08.157510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.523 [2024-12-16 21:11:08.157544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.523 Passthru0 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.523 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.523 { 00:05:18.523 "name": "Malloc2", 00:05:18.523 "aliases": [ 00:05:18.523 "8ba5f9d1-3ac6-4fa9-8525-61a4db44af42" 00:05:18.523 ], 00:05:18.523 "product_name": "Malloc disk", 00:05:18.523 "block_size": 512, 00:05:18.523 "num_blocks": 16384, 00:05:18.523 "uuid": "8ba5f9d1-3ac6-4fa9-8525-61a4db44af42", 00:05:18.523 "assigned_rate_limits": { 00:05:18.523 "rw_ios_per_sec": 0, 00:05:18.523 "rw_mbytes_per_sec": 0, 00:05:18.523 "r_mbytes_per_sec": 0, 00:05:18.523 "w_mbytes_per_sec": 0 00:05:18.523 }, 00:05:18.523 "claimed": true, 00:05:18.523 "claim_type": "exclusive_write", 00:05:18.523 "zoned": false, 00:05:18.523 "supported_io_types": { 00:05:18.523 "read": true, 00:05:18.523 "write": true, 00:05:18.523 "unmap": true, 00:05:18.523 "flush": true, 00:05:18.523 "reset": true, 00:05:18.523 "nvme_admin": false, 00:05:18.523 "nvme_io": false, 00:05:18.523 "nvme_io_md": false, 00:05:18.523 "write_zeroes": true, 00:05:18.523 "zcopy": true, 00:05:18.523 "get_zone_info": false, 00:05:18.523 "zone_management": false, 00:05:18.523 "zone_append": false, 00:05:18.523 "compare": false, 00:05:18.523 "compare_and_write": false, 00:05:18.523 "abort": true, 00:05:18.523 "seek_hole": false, 00:05:18.523 "seek_data": false, 00:05:18.523 "copy": true, 00:05:18.523 "nvme_iov_md": false 00:05:18.523 }, 00:05:18.523 "memory_domains": [ 00:05:18.523 { 00:05:18.523 "dma_device_id": "system", 00:05:18.523 "dma_device_type": 1 00:05:18.523 }, 00:05:18.523 { 00:05:18.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.523 "dma_device_type": 2 00:05:18.523 } 00:05:18.523 ], 00:05:18.523 "driver_specific": {} 00:05:18.523 }, 00:05:18.523 { 00:05:18.523 "name": "Passthru0", 00:05:18.523 "aliases": [ 00:05:18.523 "dd3d5f5b-43dc-5cbc-9bfd-21c30b02e156" 00:05:18.523 ], 00:05:18.523 "product_name": "passthru", 00:05:18.523 "block_size": 512, 00:05:18.523 "num_blocks": 16384, 00:05:18.523 "uuid": "dd3d5f5b-43dc-5cbc-9bfd-21c30b02e156", 00:05:18.523 "assigned_rate_limits": { 00:05:18.523 
"rw_ios_per_sec": 0, 00:05:18.523 "rw_mbytes_per_sec": 0, 00:05:18.523 "r_mbytes_per_sec": 0, 00:05:18.523 "w_mbytes_per_sec": 0 00:05:18.523 }, 00:05:18.523 "claimed": false, 00:05:18.523 "zoned": false, 00:05:18.523 "supported_io_types": { 00:05:18.523 "read": true, 00:05:18.523 "write": true, 00:05:18.523 "unmap": true, 00:05:18.524 "flush": true, 00:05:18.524 "reset": true, 00:05:18.524 "nvme_admin": false, 00:05:18.524 "nvme_io": false, 00:05:18.524 "nvme_io_md": false, 00:05:18.524 "write_zeroes": true, 00:05:18.524 "zcopy": true, 00:05:18.524 "get_zone_info": false, 00:05:18.524 "zone_management": false, 00:05:18.524 "zone_append": false, 00:05:18.524 "compare": false, 00:05:18.524 "compare_and_write": false, 00:05:18.524 "abort": true, 00:05:18.524 "seek_hole": false, 00:05:18.524 "seek_data": false, 00:05:18.524 "copy": true, 00:05:18.524 "nvme_iov_md": false 00:05:18.524 }, 00:05:18.524 "memory_domains": [ 00:05:18.524 { 00:05:18.524 "dma_device_id": "system", 00:05:18.524 "dma_device_type": 1 00:05:18.524 }, 00:05:18.524 { 00:05:18.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.524 "dma_device_type": 2 00:05:18.524 } 00:05:18.524 ], 00:05:18.524 "driver_specific": { 00:05:18.524 "passthru": { 00:05:18.524 "name": "Passthru0", 00:05:18.524 "base_bdev_name": "Malloc2" 00:05:18.524 } 00:05:18.524 } 00:05:18.524 } 00:05:18.524 ]' 00:05:18.524 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.524 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.524 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.524 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.524 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.785 ************************************ 00:05:18.785 END TEST rpc_daemon_integrity 00:05:18.785 ************************************ 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.785 00:05:18.785 real 0m0.228s 00:05:18.785 user 0m0.136s 00:05:18.785 sys 0m0.024s 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.785 21:11:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.785 21:11:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.785 21:11:08 rpc -- rpc/rpc.sh@84 -- # killprocess 70919 00:05:18.785 21:11:08 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 70919 ']' 00:05:18.785 21:11:08 rpc -- common/autotest_common.sh@958 -- # kill -0 70919 00:05:18.785 21:11:08 rpc -- common/autotest_common.sh@959 -- # uname 00:05:18.785 21:11:08 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.785 21:11:08 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70919 00:05:18.785 killing process with pid 70919 00:05:18.786 21:11:08 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.786 21:11:08 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.786 21:11:08 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70919' 00:05:18.786 21:11:08 rpc -- common/autotest_common.sh@973 -- # kill 70919 00:05:18.786 21:11:08 rpc -- common/autotest_common.sh@978 -- # wait 70919 00:05:19.047 00:05:19.047 real 0m2.319s 00:05:19.047 user 0m2.723s 00:05:19.047 sys 0m0.614s 00:05:19.047 21:11:08 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.047 ************************************ 00:05:19.047 END TEST rpc 00:05:19.047 ************************************ 00:05:19.047 21:11:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.047 21:11:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:19.047 21:11:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.047 21:11:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.047 21:11:08 -- common/autotest_common.sh@10 -- # set +x 00:05:19.047 ************************************ 00:05:19.047 START TEST skip_rpc 00:05:19.047 ************************************ 00:05:19.047 21:11:08 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:19.047 * Looking for test storage... 00:05:19.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:19.047 21:11:08 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.047 21:11:08 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.047 21:11:08 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.308 21:11:08 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.308 21:11:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:19.308 21:11:08 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.308 21:11:08 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.308 --rc genhtml_branch_coverage=1 00:05:19.308 --rc genhtml_function_coverage=1 00:05:19.308 --rc genhtml_legend=1 00:05:19.308 --rc geninfo_all_blocks=1 00:05:19.308 --rc geninfo_unexecuted_blocks=1 00:05:19.308 00:05:19.308 ' 00:05:19.308 21:11:08 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.308 --rc genhtml_branch_coverage=1 00:05:19.308 --rc genhtml_function_coverage=1 00:05:19.308 --rc genhtml_legend=1 00:05:19.308 --rc geninfo_all_blocks=1 00:05:19.308 --rc geninfo_unexecuted_blocks=1 00:05:19.308 00:05:19.308 ' 00:05:19.308 21:11:08 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.308 --rc genhtml_branch_coverage=1 00:05:19.308 --rc genhtml_function_coverage=1 00:05:19.308 --rc genhtml_legend=1 00:05:19.308 --rc geninfo_all_blocks=1 00:05:19.308 --rc geninfo_unexecuted_blocks=1 00:05:19.308 00:05:19.308 ' 00:05:19.308 21:11:08 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.308 --rc genhtml_branch_coverage=1 00:05:19.308 --rc genhtml_function_coverage=1 00:05:19.308 --rc genhtml_legend=1 00:05:19.308 --rc geninfo_all_blocks=1 00:05:19.308 --rc geninfo_unexecuted_blocks=1 00:05:19.308 00:05:19.308 ' 00:05:19.308 21:11:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:19.308 21:11:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:19.308 21:11:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:19.308 21:11:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.308 21:11:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.308 21:11:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.308 ************************************ 00:05:19.308 START TEST skip_rpc 00:05:19.308 ************************************ 00:05:19.308 21:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:19.308 21:11:08 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=71121 00:05:19.308 21:11:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.308 21:11:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:19.308 21:11:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:19.308 [2024-12-16 21:11:08.876048] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:19.308 [2024-12-16 21:11:08.876160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71121 ] 00:05:19.569 [2024-12-16 21:11:09.021711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.569 [2024-12-16 21:11:09.041436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71121 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71121 ']' 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71121 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71121 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.859 killing process with pid 71121 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71121' 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 71121 00:05:24.859 21:11:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71121 00:05:24.859 00:05:24.859 real 0m5.254s 00:05:24.859 user 0m4.904s 00:05:24.859 sys 0m0.250s 00:05:24.859 21:11:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.859 ************************************ 00:05:24.859 END TEST skip_rpc 00:05:24.859 ************************************ 00:05:24.859 21:11:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.859 21:11:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:24.859 21:11:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.859 21:11:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.859 21:11:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.859 ************************************ 00:05:24.859 START TEST skip_rpc_with_json 00:05:24.859 ************************************ 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71203 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71203 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71203 ']' 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.859 21:11:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.859 [2024-12-16 21:11:14.176605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
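The skip_rpc run that just finished is built around one assertion: with --no-rpc-server, spdk_tgt never creates /var/tmp/spdk.sock, so every client call has to fail. A hand-run sketch of the same check, assuming the repo layout this log uses (scripts/rpc.py is SPDK's stock RPC client, and spdk_get_version is the method the test issues through rpc_cmd):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5   # mirror the test's fixed startup wait
  # Expected to error out: no RPC listener was started.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version || echo 'RPC failed as expected'
  kill %1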
00:05:24.859 [2024-12-16 21:11:14.176733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71203 ] 00:05:24.859 [2024-12-16 21:11:14.317373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.859 [2024-12-16 21:11:14.336449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.432 [2024-12-16 21:11:15.030659] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:25.432 request: 00:05:25.432 { 00:05:25.432 "trtype": "tcp", 00:05:25.432 "method": "nvmf_get_transports", 00:05:25.432 "req_id": 1 00:05:25.432 } 00:05:25.432 Got JSON-RPC error response 00:05:25.432 response: 00:05:25.432 { 00:05:25.432 "code": -19, 00:05:25.432 "message": "No such device" 00:05:25.432 } 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.432 [2024-12-16 21:11:15.038815] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.432 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:25.694 { 00:05:25.694 "subsystems": [ 00:05:25.694 { 00:05:25.694 "subsystem": "fsdev", 00:05:25.694 "config": [ 00:05:25.694 { 00:05:25.694 "method": "fsdev_set_opts", 00:05:25.694 "params": { 00:05:25.694 "fsdev_io_pool_size": 65535, 00:05:25.694 "fsdev_io_cache_size": 256 00:05:25.694 } 00:05:25.694 } 00:05:25.694 ] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "keyring", 00:05:25.694 "config": [] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "iobuf", 00:05:25.694 "config": [ 00:05:25.694 { 00:05:25.694 "method": "iobuf_set_options", 00:05:25.694 "params": { 00:05:25.694 "small_pool_count": 8192, 00:05:25.694 "large_pool_count": 1024, 00:05:25.694 "small_bufsize": 8192, 00:05:25.694 "large_bufsize": 135168, 00:05:25.694 "enable_numa": false 00:05:25.694 } 00:05:25.694 } 00:05:25.694 ] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "sock", 00:05:25.694 "config": [ 00:05:25.694 { 
00:05:25.694 "method": "sock_set_default_impl", 00:05:25.694 "params": { 00:05:25.694 "impl_name": "posix" 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "sock_impl_set_options", 00:05:25.694 "params": { 00:05:25.694 "impl_name": "ssl", 00:05:25.694 "recv_buf_size": 4096, 00:05:25.694 "send_buf_size": 4096, 00:05:25.694 "enable_recv_pipe": true, 00:05:25.694 "enable_quickack": false, 00:05:25.694 "enable_placement_id": 0, 00:05:25.694 "enable_zerocopy_send_server": true, 00:05:25.694 "enable_zerocopy_send_client": false, 00:05:25.694 "zerocopy_threshold": 0, 00:05:25.694 "tls_version": 0, 00:05:25.694 "enable_ktls": false 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "sock_impl_set_options", 00:05:25.694 "params": { 00:05:25.694 "impl_name": "posix", 00:05:25.694 "recv_buf_size": 2097152, 00:05:25.694 "send_buf_size": 2097152, 00:05:25.694 "enable_recv_pipe": true, 00:05:25.694 "enable_quickack": false, 00:05:25.694 "enable_placement_id": 0, 00:05:25.694 "enable_zerocopy_send_server": true, 00:05:25.694 "enable_zerocopy_send_client": false, 00:05:25.694 "zerocopy_threshold": 0, 00:05:25.694 "tls_version": 0, 00:05:25.694 "enable_ktls": false 00:05:25.694 } 00:05:25.694 } 00:05:25.694 ] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "vmd", 00:05:25.694 "config": [] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "accel", 00:05:25.694 "config": [ 00:05:25.694 { 00:05:25.694 "method": "accel_set_options", 00:05:25.694 "params": { 00:05:25.694 "small_cache_size": 128, 00:05:25.694 "large_cache_size": 16, 00:05:25.694 "task_count": 2048, 00:05:25.694 "sequence_count": 2048, 00:05:25.694 "buf_count": 2048 00:05:25.694 } 00:05:25.694 } 00:05:25.694 ] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "bdev", 00:05:25.694 "config": [ 00:05:25.694 { 00:05:25.694 "method": "bdev_set_options", 00:05:25.694 "params": { 00:05:25.694 "bdev_io_pool_size": 65535, 00:05:25.694 "bdev_io_cache_size": 256, 00:05:25.694 "bdev_auto_examine": true, 00:05:25.694 "iobuf_small_cache_size": 128, 00:05:25.694 "iobuf_large_cache_size": 16 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "bdev_raid_set_options", 00:05:25.694 "params": { 00:05:25.694 "process_window_size_kb": 1024, 00:05:25.694 "process_max_bandwidth_mb_sec": 0 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "bdev_iscsi_set_options", 00:05:25.694 "params": { 00:05:25.694 "timeout_sec": 30 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "bdev_nvme_set_options", 00:05:25.694 "params": { 00:05:25.694 "action_on_timeout": "none", 00:05:25.694 "timeout_us": 0, 00:05:25.694 "timeout_admin_us": 0, 00:05:25.694 "keep_alive_timeout_ms": 10000, 00:05:25.694 "arbitration_burst": 0, 00:05:25.694 "low_priority_weight": 0, 00:05:25.694 "medium_priority_weight": 0, 00:05:25.694 "high_priority_weight": 0, 00:05:25.694 "nvme_adminq_poll_period_us": 10000, 00:05:25.694 "nvme_ioq_poll_period_us": 0, 00:05:25.694 "io_queue_requests": 0, 00:05:25.694 "delay_cmd_submit": true, 00:05:25.694 "transport_retry_count": 4, 00:05:25.694 "bdev_retry_count": 3, 00:05:25.694 "transport_ack_timeout": 0, 00:05:25.694 "ctrlr_loss_timeout_sec": 0, 00:05:25.694 "reconnect_delay_sec": 0, 00:05:25.694 "fast_io_fail_timeout_sec": 0, 00:05:25.694 "disable_auto_failback": false, 00:05:25.694 "generate_uuids": false, 00:05:25.694 "transport_tos": 0, 00:05:25.694 "nvme_error_stat": false, 00:05:25.694 "rdma_srq_size": 0, 00:05:25.694 "io_path_stat": false, 
00:05:25.694 "allow_accel_sequence": false, 00:05:25.694 "rdma_max_cq_size": 0, 00:05:25.694 "rdma_cm_event_timeout_ms": 0, 00:05:25.694 "dhchap_digests": [ 00:05:25.694 "sha256", 00:05:25.694 "sha384", 00:05:25.694 "sha512" 00:05:25.694 ], 00:05:25.694 "dhchap_dhgroups": [ 00:05:25.694 "null", 00:05:25.694 "ffdhe2048", 00:05:25.694 "ffdhe3072", 00:05:25.694 "ffdhe4096", 00:05:25.694 "ffdhe6144", 00:05:25.694 "ffdhe8192" 00:05:25.694 ], 00:05:25.694 "rdma_umr_per_io": false 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "bdev_nvme_set_hotplug", 00:05:25.694 "params": { 00:05:25.694 "period_us": 100000, 00:05:25.694 "enable": false 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "bdev_wait_for_examine" 00:05:25.694 } 00:05:25.694 ] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "scsi", 00:05:25.694 "config": null 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "scheduler", 00:05:25.694 "config": [ 00:05:25.694 { 00:05:25.694 "method": "framework_set_scheduler", 00:05:25.694 "params": { 00:05:25.694 "name": "static" 00:05:25.694 } 00:05:25.694 } 00:05:25.694 ] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "vhost_scsi", 00:05:25.694 "config": [] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "vhost_blk", 00:05:25.694 "config": [] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "ublk", 00:05:25.694 "config": [] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "nbd", 00:05:25.694 "config": [] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "nvmf", 00:05:25.694 "config": [ 00:05:25.694 { 00:05:25.694 "method": "nvmf_set_config", 00:05:25.694 "params": { 00:05:25.694 "discovery_filter": "match_any", 00:05:25.694 "admin_cmd_passthru": { 00:05:25.694 "identify_ctrlr": false 00:05:25.694 }, 00:05:25.694 "dhchap_digests": [ 00:05:25.694 "sha256", 00:05:25.694 "sha384", 00:05:25.694 "sha512" 00:05:25.694 ], 00:05:25.694 "dhchap_dhgroups": [ 00:05:25.694 "null", 00:05:25.694 "ffdhe2048", 00:05:25.694 "ffdhe3072", 00:05:25.694 "ffdhe4096", 00:05:25.694 "ffdhe6144", 00:05:25.694 "ffdhe8192" 00:05:25.694 ] 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "nvmf_set_max_subsystems", 00:05:25.694 "params": { 00:05:25.694 "max_subsystems": 1024 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "nvmf_set_crdt", 00:05:25.694 "params": { 00:05:25.694 "crdt1": 0, 00:05:25.694 "crdt2": 0, 00:05:25.694 "crdt3": 0 00:05:25.694 } 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "method": "nvmf_create_transport", 00:05:25.694 "params": { 00:05:25.694 "trtype": "TCP", 00:05:25.694 "max_queue_depth": 128, 00:05:25.694 "max_io_qpairs_per_ctrlr": 127, 00:05:25.694 "in_capsule_data_size": 4096, 00:05:25.694 "max_io_size": 131072, 00:05:25.694 "io_unit_size": 131072, 00:05:25.694 "max_aq_depth": 128, 00:05:25.694 "num_shared_buffers": 511, 00:05:25.694 "buf_cache_size": 4294967295, 00:05:25.694 "dif_insert_or_strip": false, 00:05:25.694 "zcopy": false, 00:05:25.694 "c2h_success": true, 00:05:25.694 "sock_priority": 0, 00:05:25.694 "abort_timeout_sec": 1, 00:05:25.694 "ack_timeout": 0, 00:05:25.694 "data_wr_pool_size": 0 00:05:25.694 } 00:05:25.694 } 00:05:25.694 ] 00:05:25.694 }, 00:05:25.694 { 00:05:25.694 "subsystem": "iscsi", 00:05:25.694 "config": [ 00:05:25.694 { 00:05:25.694 "method": "iscsi_set_options", 00:05:25.694 "params": { 00:05:25.694 "node_base": "iqn.2016-06.io.spdk", 00:05:25.694 "max_sessions": 128, 00:05:25.694 "max_connections_per_session": 2, 00:05:25.694 
"max_queue_depth": 64, 00:05:25.694 "default_time2wait": 2, 00:05:25.694 "default_time2retain": 20, 00:05:25.694 "first_burst_length": 8192, 00:05:25.694 "immediate_data": true, 00:05:25.694 "allow_duplicated_isid": false, 00:05:25.694 "error_recovery_level": 0, 00:05:25.694 "nop_timeout": 60, 00:05:25.694 "nop_in_interval": 30, 00:05:25.694 "disable_chap": false, 00:05:25.694 "require_chap": false, 00:05:25.694 "mutual_chap": false, 00:05:25.694 "chap_group": 0, 00:05:25.694 "max_large_datain_per_connection": 64, 00:05:25.694 "max_r2t_per_connection": 4, 00:05:25.694 "pdu_pool_size": 36864, 00:05:25.694 "immediate_data_pool_size": 16384, 00:05:25.694 "data_out_pool_size": 2048 00:05:25.694 } 00:05:25.694 } 00:05:25.694 ] 00:05:25.694 } 00:05:25.694 ] 00:05:25.694 } 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71203 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71203 ']' 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71203 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71203 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71203' 00:05:25.694 killing process with pid 71203 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71203 00:05:25.694 21:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71203 00:05:25.955 21:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71231 00:05:25.955 21:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:25.955 21:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71231 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71231 ']' 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71231 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71231 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.239 killing process with pid 71231 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71231' 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 71231 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71231 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.239 00:05:31.239 real 0m6.640s 00:05:31.239 user 0m6.314s 00:05:31.239 sys 0m0.556s 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.239 ************************************ 00:05:31.239 END TEST skip_rpc_with_json 00:05:31.239 ************************************ 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.239 21:11:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:31.239 21:11:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.239 21:11:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.239 21:11:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.239 ************************************ 00:05:31.239 START TEST skip_rpc_with_delay 00:05:31.239 ************************************ 00:05:31.239 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.240 [2024-12-16 21:11:20.864532] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
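That *ERROR* line is the whole point of skip_rpc_with_delay: --wait-for-rpc asks the target to pause initialization until an RPC tells it to proceed, which is contradictory when --no-rpc-server removes the RPC listener, so spdk_app_start refuses the combination. A minimal reproduction, assuming the same binary path used throughout this log:

  # Expected to exit non-zero with the app.c error shown above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "exit status: $?"

The test wraps this invocation in the NOT helper, so the non-zero exit is what counts as a pass.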
00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.240 00:05:31.240 real 0m0.119s 00:05:31.240 user 0m0.064s 00:05:31.240 sys 0m0.053s 00:05:31.240 ************************************ 00:05:31.240 END TEST skip_rpc_with_delay 00:05:31.240 ************************************ 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.240 21:11:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:31.499 21:11:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:31.499 21:11:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:31.499 21:11:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:31.499 21:11:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.499 21:11:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.499 21:11:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.499 ************************************ 00:05:31.499 START TEST exit_on_failed_rpc_init 00:05:31.499 ************************************ 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71343 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71343 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71343 ']' 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.499 21:11:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.499 [2024-12-16 21:11:21.037501] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
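exit_on_failed_rpc_init sets up the collision whose fallout appears a few entries below: the first spdk_tgt owns the default RPC socket, and a second instance started against the same path fails rpc_listen and aborts. A bare-bones sketch of that scenario, assuming the same binary as the test:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # first target binds /var/tmp/spdk.sock
  sleep 5
  # Second instance on a different core mask; expected to die with
  # 'RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.'
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
  kill %1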
00:05:31.499 [2024-12-16 21:11:21.037668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71343 ] 00:05:31.499 [2024-12-16 21:11:21.179582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.499 [2024-12-16 21:11:21.196293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:32.435 21:11:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.435 [2024-12-16 21:11:21.946104] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:32.435 [2024-12-16 21:11:21.946211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71355 ] 00:05:32.435 [2024-12-16 21:11:22.088570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.435 [2024-12-16 21:11:22.106150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.435 [2024-12-16 21:11:22.106223] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:32.435 [2024-12-16 21:11:22.106238] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:32.435 [2024-12-16 21:11:22.106247] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71343 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71343 ']' 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71343 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71343 00:05:32.695 killing process with pid 71343 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71343' 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71343 00:05:32.695 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71343 00:05:32.956 ************************************ 00:05:32.956 END TEST exit_on_failed_rpc_init 00:05:32.956 ************************************ 00:05:32.956 00:05:32.956 real 0m1.469s 00:05:32.956 user 0m1.622s 00:05:32.956 sys 0m0.355s 00:05:32.956 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.956 21:11:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.956 21:11:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.956 00:05:32.956 real 0m13.822s 00:05:32.956 user 0m13.035s 00:05:32.956 sys 0m1.393s 00:05:32.956 21:11:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.956 ************************************ 00:05:32.956 END TEST skip_rpc 00:05:32.956 ************************************ 00:05:32.956 21:11:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.956 21:11:22 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:32.956 21:11:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.956 21:11:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.956 21:11:22 -- common/autotest_common.sh@10 -- # set +x 00:05:32.956 
************************************ 00:05:32.956 START TEST rpc_client 00:05:32.956 ************************************ 00:05:32.956 21:11:22 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:32.956 * Looking for test storage... 00:05:32.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:32.956 21:11:22 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.956 21:11:22 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.956 21:11:22 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.217 21:11:22 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.217 21:11:22 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:33.217 21:11:22 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.217 21:11:22 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.217 --rc genhtml_branch_coverage=1 00:05:33.217 --rc genhtml_function_coverage=1 00:05:33.217 --rc genhtml_legend=1 00:05:33.217 --rc geninfo_all_blocks=1 00:05:33.217 --rc geninfo_unexecuted_blocks=1 00:05:33.217 00:05:33.217 ' 00:05:33.217 21:11:22 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.217 --rc genhtml_branch_coverage=1 00:05:33.217 --rc genhtml_function_coverage=1 00:05:33.217 --rc genhtml_legend=1 00:05:33.217 --rc geninfo_all_blocks=1 00:05:33.217 --rc geninfo_unexecuted_blocks=1 00:05:33.217 00:05:33.217 ' 00:05:33.217 21:11:22 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.217 --rc genhtml_branch_coverage=1 00:05:33.217 --rc genhtml_function_coverage=1 00:05:33.217 --rc genhtml_legend=1 00:05:33.217 --rc geninfo_all_blocks=1 00:05:33.217 --rc geninfo_unexecuted_blocks=1 00:05:33.217 00:05:33.217 ' 00:05:33.217 21:11:22 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.217 --rc genhtml_branch_coverage=1 00:05:33.217 --rc genhtml_function_coverage=1 00:05:33.217 --rc genhtml_legend=1 00:05:33.217 --rc geninfo_all_blocks=1 00:05:33.217 --rc geninfo_unexecuted_blocks=1 00:05:33.217 00:05:33.217 ' 00:05:33.217 21:11:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:33.217 OK 00:05:33.217 21:11:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.217 00:05:33.217 real 0m0.180s 00:05:33.217 user 0m0.095s 00:05:33.217 sys 0m0.090s 00:05:33.217 21:11:22 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.217 21:11:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:33.217 ************************************ 00:05:33.217 END TEST rpc_client 00:05:33.217 ************************************ 00:05:33.217 21:11:22 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:33.218 21:11:22 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.218 21:11:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.218 21:11:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.218 ************************************ 00:05:33.218 START TEST json_config 00:05:33.218 ************************************ 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.218 21:11:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.218 21:11:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.218 21:11:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.218 21:11:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.218 21:11:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.218 21:11:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.218 21:11:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.218 21:11:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.218 21:11:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.218 21:11:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.218 21:11:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.218 21:11:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:33.218 21:11:22 json_config -- scripts/common.sh@345 -- # : 1 00:05:33.218 21:11:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.218 21:11:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.218 21:11:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:33.218 21:11:22 json_config -- scripts/common.sh@353 -- # local d=1 00:05:33.218 21:11:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.218 21:11:22 json_config -- scripts/common.sh@355 -- # echo 1 00:05:33.218 21:11:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.218 21:11:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:33.218 21:11:22 json_config -- scripts/common.sh@353 -- # local d=2 00:05:33.218 21:11:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.218 21:11:22 json_config -- scripts/common.sh@355 -- # echo 2 00:05:33.218 21:11:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.218 21:11:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.218 21:11:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.218 21:11:22 json_config -- scripts/common.sh@368 -- # return 0 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.218 --rc genhtml_branch_coverage=1 00:05:33.218 --rc genhtml_function_coverage=1 00:05:33.218 --rc genhtml_legend=1 00:05:33.218 --rc geninfo_all_blocks=1 00:05:33.218 --rc geninfo_unexecuted_blocks=1 00:05:33.218 00:05:33.218 ' 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.218 --rc genhtml_branch_coverage=1 00:05:33.218 --rc genhtml_function_coverage=1 00:05:33.218 --rc genhtml_legend=1 00:05:33.218 --rc geninfo_all_blocks=1 00:05:33.218 --rc geninfo_unexecuted_blocks=1 00:05:33.218 00:05:33.218 ' 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.218 --rc genhtml_branch_coverage=1 00:05:33.218 --rc genhtml_function_coverage=1 00:05:33.218 --rc genhtml_legend=1 00:05:33.218 --rc geninfo_all_blocks=1 00:05:33.218 --rc geninfo_unexecuted_blocks=1 00:05:33.218 00:05:33.218 ' 00:05:33.218 21:11:22 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.218 --rc genhtml_branch_coverage=1 00:05:33.218 --rc genhtml_function_coverage=1 00:05:33.218 --rc genhtml_legend=1 00:05:33.218 --rc geninfo_all_blocks=1 00:05:33.218 --rc geninfo_unexecuted_blocks=1 00:05:33.218 00:05:33.218 ' 00:05:33.218 21:11:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.218 21:11:22 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:10f3d47c-ff38-4224-8971-148f962d5374 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=10f3d47c-ff38-4224-8971-148f962d5374 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:33.218 21:11:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:33.218 21:11:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.218 21:11:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.218 21:11:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.218 21:11:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.218 21:11:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.218 21:11:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.218 21:11:22 json_config -- paths/export.sh@5 -- # export PATH 00:05:33.218 21:11:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@51 -- # : 0 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:33.218 21:11:22 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.218 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.218 21:11:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.218 21:11:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:33.218 21:11:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:33.218 21:11:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:33.218 21:11:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:33.218 21:11:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:33.218 21:11:22 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:33.218 WARNING: No tests are enabled so not running JSON configuration tests 00:05:33.219 21:11:22 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:33.219 00:05:33.219 real 0m0.148s 00:05:33.219 user 0m0.092s 00:05:33.219 sys 0m0.055s 00:05:33.219 21:11:22 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.219 21:11:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.219 ************************************ 00:05:33.219 END TEST json_config 00:05:33.219 ************************************ 00:05:33.481 21:11:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:33.481 21:11:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.481 21:11:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.481 21:11:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.481 ************************************ 00:05:33.481 START TEST json_config_extra_key 00:05:33.481 ************************************ 00:05:33.481 21:11:22 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:33.481 21:11:22 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.481 21:11:22 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.481 21:11:22 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.481 21:11:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.481 21:11:23 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.481 21:11:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:33.481 21:11:23 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.481 21:11:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.481 --rc genhtml_branch_coverage=1 00:05:33.481 --rc genhtml_function_coverage=1 00:05:33.481 --rc genhtml_legend=1 00:05:33.481 --rc geninfo_all_blocks=1 00:05:33.481 --rc geninfo_unexecuted_blocks=1 00:05:33.481 00:05:33.481 ' 00:05:33.481 21:11:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.481 --rc genhtml_branch_coverage=1 00:05:33.481 --rc genhtml_function_coverage=1 00:05:33.481 --rc genhtml_legend=1 00:05:33.481 --rc geninfo_all_blocks=1 00:05:33.481 --rc geninfo_unexecuted_blocks=1 00:05:33.481 00:05:33.481 ' 00:05:33.481 21:11:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.481 --rc genhtml_branch_coverage=1 00:05:33.481 --rc genhtml_function_coverage=1 00:05:33.481 --rc genhtml_legend=1 00:05:33.481 --rc geninfo_all_blocks=1 00:05:33.481 --rc geninfo_unexecuted_blocks=1 00:05:33.481 00:05:33.481 ' 00:05:33.481 21:11:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.481 --rc genhtml_branch_coverage=1 00:05:33.481 --rc 
genhtml_function_coverage=1 00:05:33.481 --rc genhtml_legend=1 00:05:33.481 --rc geninfo_all_blocks=1 00:05:33.481 --rc geninfo_unexecuted_blocks=1 00:05:33.481 00:05:33.481 ' 00:05:33.481 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:33.481 21:11:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:33.481 21:11:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.481 21:11:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.481 21:11:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.481 21:11:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.481 21:11:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:10f3d47c-ff38-4224-8971-148f962d5374 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=10f3d47c-ff38-4224-8971-148f962d5374 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:33.482 21:11:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:33.482 21:11:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.482 21:11:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.482 21:11:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.482 21:11:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.482 21:11:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.482 21:11:23 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.482 21:11:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:33.482 21:11:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.482 21:11:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.482 INFO: launching applications... 00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
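Annotation: the json_config_extra_key trace above (script lines @17-@22) shows the harness keeping per-app state in bash associative arrays keyed by app name ('target'), with an ERR trap for fail-fast cleanup. A minimal sketch of that bookkeeping pattern; the config path value here is illustrative:

    # Per-app bookkeeping, as in the @17-@22 trace lines above.
    declare -A app_pid=(['target']='')                           # filled in once launched
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')  # RPC socket per app
    declare -A app_params=(['target']='-m 0x1 -s 1024')          # core mask + memory size
    declare -A configs_path=(['target']='extra_key.json')        # JSON config to load (illustrative path)
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR           # abort the test on any error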
00:05:33.482 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71538 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:33.482 Waiting for target to run... 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71538 /var/tmp/spdk_tgt.sock 00:05:33.482 21:11:23 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71538 ']' 00:05:33.482 21:11:23 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.482 21:11:23 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.482 21:11:23 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.482 21:11:23 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.482 21:11:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.482 21:11:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:33.482 [2024-12-16 21:11:23.160305] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:33.482 [2024-12-16 21:11:23.160602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71538 ] 00:05:34.055 [2024-12-16 21:11:23.502021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.055 [2024-12-16 21:11:23.516130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.314 21:11:23 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.314 21:11:23 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:34.314 00:05:34.314 INFO: shutting down applications... 00:05:34.314 21:11:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:34.314 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
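Annotation: json_config_test_start_app launches spdk_tgt in the background and then blocks in waitforlisten until the RPC socket answers or the process dies. The real helper lives in autotest_common.sh and is more elaborate; this is a simplified sketch in which the probe method and retry cadence are assumptions:

    # Simplified waitforlisten: poll the app's RPC socket until it responds.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # the app died while starting up
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                             # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1                                     # timed out waiting for the listener
    }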
00:05:34.314 21:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:34.314 21:11:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:34.314 21:11:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:34.314 21:11:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71538 ]] 00:05:34.314 21:11:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71538 00:05:34.314 21:11:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:34.314 21:11:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.314 21:11:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71538 00:05:34.314 21:11:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.883 21:11:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.883 21:11:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.883 21:11:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71538 00:05:34.883 21:11:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.883 21:11:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:34.883 21:11:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.883 21:11:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.883 SPDK target shutdown done 00:05:34.883 21:11:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:34.883 Success 00:05:34.883 00:05:34.883 real 0m1.555s 00:05:34.883 user 0m1.203s 00:05:34.883 sys 0m0.356s 00:05:34.883 21:11:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.883 ************************************ 00:05:34.883 END TEST json_config_extra_key 00:05:34.883 ************************************ 00:05:34.883 21:11:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.883 21:11:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.883 21:11:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.883 21:11:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.883 21:11:24 -- common/autotest_common.sh@10 -- # set +x 00:05:34.883 ************************************ 00:05:34.883 START TEST alias_rpc 00:05:34.883 ************************************ 00:05:34.883 21:11:24 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.145 * Looking for test storage... 
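Annotation: the shutdown sequence traced just above sends SIGINT to the target and then polls kill -0 up to 30 times at 0.5 s intervals (roughly a 15 s budget); kill -0 delivers no signal and merely tests that the PID still exists. A minimal sketch of that loop:

    # Graceful-shutdown pattern from the trace: SIGINT, then poll for exit.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then    # pid gone -> shutdown finished
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1                                     # still alive after ~15 s
    }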
00:05:35.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.145 21:11:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.145 --rc genhtml_branch_coverage=1 00:05:35.145 --rc genhtml_function_coverage=1 00:05:35.145 --rc genhtml_legend=1 00:05:35.145 --rc geninfo_all_blocks=1 00:05:35.145 --rc geninfo_unexecuted_blocks=1 00:05:35.145 00:05:35.145 ' 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.145 --rc genhtml_branch_coverage=1 00:05:35.145 --rc genhtml_function_coverage=1 00:05:35.145 --rc genhtml_legend=1 00:05:35.145 --rc geninfo_all_blocks=1 00:05:35.145 --rc geninfo_unexecuted_blocks=1 00:05:35.145 00:05:35.145 ' 00:05:35.145 21:11:24 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.145 --rc genhtml_branch_coverage=1 00:05:35.145 --rc genhtml_function_coverage=1 00:05:35.145 --rc genhtml_legend=1 00:05:35.145 --rc geninfo_all_blocks=1 00:05:35.145 --rc geninfo_unexecuted_blocks=1 00:05:35.145 00:05:35.145 ' 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.145 --rc genhtml_branch_coverage=1 00:05:35.145 --rc genhtml_function_coverage=1 00:05:35.145 --rc genhtml_legend=1 00:05:35.145 --rc geninfo_all_blocks=1 00:05:35.145 --rc geninfo_unexecuted_blocks=1 00:05:35.145 00:05:35.145 ' 00:05:35.145 21:11:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.145 21:11:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71611 00:05:35.145 21:11:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71611 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71611 ']' 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.145 21:11:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.145 21:11:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.145 [2024-12-16 21:11:24.774011] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:35.145 [2024-12-16 21:11:24.774394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71611 ] 00:05:35.407 [2024-12-16 21:11:24.921571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.407 [2024-12-16 21:11:24.950335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.985 21:11:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.985 21:11:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.985 21:11:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:36.246 21:11:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71611 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71611 ']' 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71611 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71611 00:05:36.246 killing process with pid 71611 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71611' 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@973 -- # kill 71611 00:05:36.246 21:11:25 alias_rpc -- common/autotest_common.sh@978 -- # wait 71611 00:05:36.509 ************************************ 00:05:36.509 END TEST alias_rpc 00:05:36.509 ************************************ 00:05:36.509 00:05:36.509 real 0m1.631s 00:05:36.509 user 0m1.730s 00:05:36.509 sys 0m0.445s 00:05:36.509 21:11:26 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.509 21:11:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.772 21:11:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:36.772 21:11:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:36.772 21:11:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.772 21:11:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.772 21:11:26 -- common/autotest_common.sh@10 -- # set +x 00:05:36.772 ************************************ 00:05:36.772 START TEST spdkcli_tcp 00:05:36.772 ************************************ 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:36.772 * Looking for test storage... 
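Annotation: alias_rpc drives the target with 'rpc.py load_config -i' and then tears it down through the killprocess helper traced above: confirm the PID is alive, check the process's comm name so a sudo wrapper is never signalled, then kill and wait to reap the exit status. A condensed sketch of those checks:

    # Condensed killprocess, mirroring the uname/ps guards in the trace.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # must still be running
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" != sudo ] || return 1       # never kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap and propagate status
    }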
00:05:36.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.772 21:11:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.772 --rc genhtml_branch_coverage=1 00:05:36.772 --rc genhtml_function_coverage=1 00:05:36.772 --rc genhtml_legend=1 00:05:36.772 --rc geninfo_all_blocks=1 00:05:36.772 --rc geninfo_unexecuted_blocks=1 00:05:36.772 00:05:36.772 ' 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.772 --rc genhtml_branch_coverage=1 00:05:36.772 --rc genhtml_function_coverage=1 00:05:36.772 --rc genhtml_legend=1 00:05:36.772 --rc geninfo_all_blocks=1 00:05:36.772 --rc geninfo_unexecuted_blocks=1 00:05:36.772 
00:05:36.772 ' 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.772 --rc genhtml_branch_coverage=1 00:05:36.772 --rc genhtml_function_coverage=1 00:05:36.772 --rc genhtml_legend=1 00:05:36.772 --rc geninfo_all_blocks=1 00:05:36.772 --rc geninfo_unexecuted_blocks=1 00:05:36.772 00:05:36.772 ' 00:05:36.772 21:11:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.772 --rc genhtml_branch_coverage=1 00:05:36.772 --rc genhtml_function_coverage=1 00:05:36.772 --rc genhtml_legend=1 00:05:36.772 --rc geninfo_all_blocks=1 00:05:36.772 --rc geninfo_unexecuted_blocks=1 00:05:36.772 00:05:36.772 ' 00:05:36.772 21:11:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:36.772 21:11:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:36.772 21:11:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:36.772 21:11:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:36.772 21:11:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:36.772 21:11:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:36.773 21:11:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:36.773 21:11:26 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.773 21:11:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.773 21:11:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71691 00:05:36.773 21:11:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71691 00:05:36.773 21:11:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71691 ']' 00:05:36.773 21:11:26 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.773 21:11:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.773 21:11:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.773 21:11:26 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.773 21:11:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.773 21:11:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.034 [2024-12-16 21:11:26.495055] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
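Annotation: the lcov probe repeated at the top of every test ('lt 1.15 2') relies on a cmp_versions helper: split both version strings on '.', '-' and ':', then compare field by field numerically, treating missing fields as zero. A simplified sketch of the less-than case; the function name here is illustrative, and purely numeric fields are assumed:

    # Simplified numeric version compare: succeeds when $1 < $2.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov is older than 2: use the legacy LCOV_OPTS'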
00:05:37.034 [2024-12-16 21:11:26.495198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71691 ] 00:05:37.034 [2024-12-16 21:11:26.642509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.034 [2024-12-16 21:11:26.672682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.034 [2024-12-16 21:11:26.672708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.980 21:11:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.980 21:11:27 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:37.980 21:11:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71708 00:05:37.980 21:11:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:37.980 21:11:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:37.980 [ 00:05:37.980 "bdev_malloc_delete", 00:05:37.980 "bdev_malloc_create", 00:05:37.980 "bdev_null_resize", 00:05:37.980 "bdev_null_delete", 00:05:37.980 "bdev_null_create", 00:05:37.980 "bdev_nvme_cuse_unregister", 00:05:37.980 "bdev_nvme_cuse_register", 00:05:37.980 "bdev_opal_new_user", 00:05:37.980 "bdev_opal_set_lock_state", 00:05:37.980 "bdev_opal_delete", 00:05:37.980 "bdev_opal_get_info", 00:05:37.980 "bdev_opal_create", 00:05:37.980 "bdev_nvme_opal_revert", 00:05:37.980 "bdev_nvme_opal_init", 00:05:37.980 "bdev_nvme_send_cmd", 00:05:37.980 "bdev_nvme_set_keys", 00:05:37.980 "bdev_nvme_get_path_iostat", 00:05:37.980 "bdev_nvme_get_mdns_discovery_info", 00:05:37.980 "bdev_nvme_stop_mdns_discovery", 00:05:37.980 "bdev_nvme_start_mdns_discovery", 00:05:37.980 "bdev_nvme_set_multipath_policy", 00:05:37.980 "bdev_nvme_set_preferred_path", 00:05:37.980 "bdev_nvme_get_io_paths", 00:05:37.980 "bdev_nvme_remove_error_injection", 00:05:37.980 "bdev_nvme_add_error_injection", 00:05:37.980 "bdev_nvme_get_discovery_info", 00:05:37.980 "bdev_nvme_stop_discovery", 00:05:37.980 "bdev_nvme_start_discovery", 00:05:37.980 "bdev_nvme_get_controller_health_info", 00:05:37.980 "bdev_nvme_disable_controller", 00:05:37.980 "bdev_nvme_enable_controller", 00:05:37.980 "bdev_nvme_reset_controller", 00:05:37.980 "bdev_nvme_get_transport_statistics", 00:05:37.980 "bdev_nvme_apply_firmware", 00:05:37.980 "bdev_nvme_detach_controller", 00:05:37.980 "bdev_nvme_get_controllers", 00:05:37.980 "bdev_nvme_attach_controller", 00:05:37.980 "bdev_nvme_set_hotplug", 00:05:37.980 "bdev_nvme_set_options", 00:05:37.980 "bdev_passthru_delete", 00:05:37.980 "bdev_passthru_create", 00:05:37.980 "bdev_lvol_set_parent_bdev", 00:05:37.980 "bdev_lvol_set_parent", 00:05:37.980 "bdev_lvol_check_shallow_copy", 00:05:37.980 "bdev_lvol_start_shallow_copy", 00:05:37.980 "bdev_lvol_grow_lvstore", 00:05:37.980 "bdev_lvol_get_lvols", 00:05:37.980 "bdev_lvol_get_lvstores", 00:05:37.980 "bdev_lvol_delete", 00:05:37.980 "bdev_lvol_set_read_only", 00:05:37.980 "bdev_lvol_resize", 00:05:37.980 "bdev_lvol_decouple_parent", 00:05:37.980 "bdev_lvol_inflate", 00:05:37.980 "bdev_lvol_rename", 00:05:37.980 "bdev_lvol_clone_bdev", 00:05:37.980 "bdev_lvol_clone", 00:05:37.980 "bdev_lvol_snapshot", 00:05:37.980 "bdev_lvol_create", 00:05:37.980 "bdev_lvol_delete_lvstore", 00:05:37.980 "bdev_lvol_rename_lvstore", 00:05:37.980 
"bdev_lvol_create_lvstore", 00:05:37.980 "bdev_raid_set_options", 00:05:37.980 "bdev_raid_remove_base_bdev", 00:05:37.980 "bdev_raid_add_base_bdev", 00:05:37.980 "bdev_raid_delete", 00:05:37.980 "bdev_raid_create", 00:05:37.980 "bdev_raid_get_bdevs", 00:05:37.980 "bdev_error_inject_error", 00:05:37.980 "bdev_error_delete", 00:05:37.980 "bdev_error_create", 00:05:37.980 "bdev_split_delete", 00:05:37.980 "bdev_split_create", 00:05:37.980 "bdev_delay_delete", 00:05:37.980 "bdev_delay_create", 00:05:37.980 "bdev_delay_update_latency", 00:05:37.980 "bdev_zone_block_delete", 00:05:37.980 "bdev_zone_block_create", 00:05:37.980 "blobfs_create", 00:05:37.980 "blobfs_detect", 00:05:37.980 "blobfs_set_cache_size", 00:05:37.980 "bdev_xnvme_delete", 00:05:37.980 "bdev_xnvme_create", 00:05:37.980 "bdev_aio_delete", 00:05:37.980 "bdev_aio_rescan", 00:05:37.980 "bdev_aio_create", 00:05:37.980 "bdev_ftl_set_property", 00:05:37.980 "bdev_ftl_get_properties", 00:05:37.980 "bdev_ftl_get_stats", 00:05:37.980 "bdev_ftl_unmap", 00:05:37.980 "bdev_ftl_unload", 00:05:37.980 "bdev_ftl_delete", 00:05:37.980 "bdev_ftl_load", 00:05:37.980 "bdev_ftl_create", 00:05:37.980 "bdev_virtio_attach_controller", 00:05:37.980 "bdev_virtio_scsi_get_devices", 00:05:37.980 "bdev_virtio_detach_controller", 00:05:37.980 "bdev_virtio_blk_set_hotplug", 00:05:37.980 "bdev_iscsi_delete", 00:05:37.980 "bdev_iscsi_create", 00:05:37.980 "bdev_iscsi_set_options", 00:05:37.980 "accel_error_inject_error", 00:05:37.980 "ioat_scan_accel_module", 00:05:37.980 "dsa_scan_accel_module", 00:05:37.980 "iaa_scan_accel_module", 00:05:37.980 "keyring_file_remove_key", 00:05:37.980 "keyring_file_add_key", 00:05:37.980 "keyring_linux_set_options", 00:05:37.980 "fsdev_aio_delete", 00:05:37.980 "fsdev_aio_create", 00:05:37.981 "iscsi_get_histogram", 00:05:37.981 "iscsi_enable_histogram", 00:05:37.981 "iscsi_set_options", 00:05:37.981 "iscsi_get_auth_groups", 00:05:37.981 "iscsi_auth_group_remove_secret", 00:05:37.981 "iscsi_auth_group_add_secret", 00:05:37.981 "iscsi_delete_auth_group", 00:05:37.981 "iscsi_create_auth_group", 00:05:37.981 "iscsi_set_discovery_auth", 00:05:37.981 "iscsi_get_options", 00:05:37.981 "iscsi_target_node_request_logout", 00:05:37.981 "iscsi_target_node_set_redirect", 00:05:37.981 "iscsi_target_node_set_auth", 00:05:37.981 "iscsi_target_node_add_lun", 00:05:37.981 "iscsi_get_stats", 00:05:37.981 "iscsi_get_connections", 00:05:37.981 "iscsi_portal_group_set_auth", 00:05:37.981 "iscsi_start_portal_group", 00:05:37.981 "iscsi_delete_portal_group", 00:05:37.981 "iscsi_create_portal_group", 00:05:37.981 "iscsi_get_portal_groups", 00:05:37.981 "iscsi_delete_target_node", 00:05:37.981 "iscsi_target_node_remove_pg_ig_maps", 00:05:37.981 "iscsi_target_node_add_pg_ig_maps", 00:05:37.981 "iscsi_create_target_node", 00:05:37.981 "iscsi_get_target_nodes", 00:05:37.981 "iscsi_delete_initiator_group", 00:05:37.981 "iscsi_initiator_group_remove_initiators", 00:05:37.981 "iscsi_initiator_group_add_initiators", 00:05:37.981 "iscsi_create_initiator_group", 00:05:37.981 "iscsi_get_initiator_groups", 00:05:37.981 "nvmf_set_crdt", 00:05:37.981 "nvmf_set_config", 00:05:37.981 "nvmf_set_max_subsystems", 00:05:37.981 "nvmf_stop_mdns_prr", 00:05:37.981 "nvmf_publish_mdns_prr", 00:05:37.981 "nvmf_subsystem_get_listeners", 00:05:37.981 "nvmf_subsystem_get_qpairs", 00:05:37.981 "nvmf_subsystem_get_controllers", 00:05:37.981 "nvmf_get_stats", 00:05:37.981 "nvmf_get_transports", 00:05:37.981 "nvmf_create_transport", 00:05:37.981 "nvmf_get_targets", 00:05:37.981 
"nvmf_delete_target", 00:05:37.981 "nvmf_create_target", 00:05:37.981 "nvmf_subsystem_allow_any_host", 00:05:37.981 "nvmf_subsystem_set_keys", 00:05:37.981 "nvmf_subsystem_remove_host", 00:05:37.981 "nvmf_subsystem_add_host", 00:05:37.981 "nvmf_ns_remove_host", 00:05:37.981 "nvmf_ns_add_host", 00:05:37.981 "nvmf_subsystem_remove_ns", 00:05:37.981 "nvmf_subsystem_set_ns_ana_group", 00:05:37.981 "nvmf_subsystem_add_ns", 00:05:37.981 "nvmf_subsystem_listener_set_ana_state", 00:05:37.981 "nvmf_discovery_get_referrals", 00:05:37.981 "nvmf_discovery_remove_referral", 00:05:37.981 "nvmf_discovery_add_referral", 00:05:37.981 "nvmf_subsystem_remove_listener", 00:05:37.981 "nvmf_subsystem_add_listener", 00:05:37.981 "nvmf_delete_subsystem", 00:05:37.981 "nvmf_create_subsystem", 00:05:37.981 "nvmf_get_subsystems", 00:05:37.981 "env_dpdk_get_mem_stats", 00:05:37.981 "nbd_get_disks", 00:05:37.981 "nbd_stop_disk", 00:05:37.981 "nbd_start_disk", 00:05:37.981 "ublk_recover_disk", 00:05:37.981 "ublk_get_disks", 00:05:37.981 "ublk_stop_disk", 00:05:37.981 "ublk_start_disk", 00:05:37.981 "ublk_destroy_target", 00:05:37.981 "ublk_create_target", 00:05:37.981 "virtio_blk_create_transport", 00:05:37.981 "virtio_blk_get_transports", 00:05:37.981 "vhost_controller_set_coalescing", 00:05:37.981 "vhost_get_controllers", 00:05:37.981 "vhost_delete_controller", 00:05:37.981 "vhost_create_blk_controller", 00:05:37.981 "vhost_scsi_controller_remove_target", 00:05:37.981 "vhost_scsi_controller_add_target", 00:05:37.981 "vhost_start_scsi_controller", 00:05:37.981 "vhost_create_scsi_controller", 00:05:37.981 "thread_set_cpumask", 00:05:37.981 "scheduler_set_options", 00:05:37.981 "framework_get_governor", 00:05:37.981 "framework_get_scheduler", 00:05:37.981 "framework_set_scheduler", 00:05:37.981 "framework_get_reactors", 00:05:37.981 "thread_get_io_channels", 00:05:37.981 "thread_get_pollers", 00:05:37.981 "thread_get_stats", 00:05:37.981 "framework_monitor_context_switch", 00:05:37.981 "spdk_kill_instance", 00:05:37.981 "log_enable_timestamps", 00:05:37.981 "log_get_flags", 00:05:37.981 "log_clear_flag", 00:05:37.981 "log_set_flag", 00:05:37.981 "log_get_level", 00:05:37.981 "log_set_level", 00:05:37.981 "log_get_print_level", 00:05:37.981 "log_set_print_level", 00:05:37.981 "framework_enable_cpumask_locks", 00:05:37.981 "framework_disable_cpumask_locks", 00:05:37.981 "framework_wait_init", 00:05:37.981 "framework_start_init", 00:05:37.981 "scsi_get_devices", 00:05:37.981 "bdev_get_histogram", 00:05:37.981 "bdev_enable_histogram", 00:05:37.981 "bdev_set_qos_limit", 00:05:37.981 "bdev_set_qd_sampling_period", 00:05:37.981 "bdev_get_bdevs", 00:05:37.981 "bdev_reset_iostat", 00:05:37.981 "bdev_get_iostat", 00:05:37.981 "bdev_examine", 00:05:37.981 "bdev_wait_for_examine", 00:05:37.981 "bdev_set_options", 00:05:37.981 "accel_get_stats", 00:05:37.981 "accel_set_options", 00:05:37.981 "accel_set_driver", 00:05:37.981 "accel_crypto_key_destroy", 00:05:37.981 "accel_crypto_keys_get", 00:05:37.981 "accel_crypto_key_create", 00:05:37.981 "accel_assign_opc", 00:05:37.981 "accel_get_module_info", 00:05:37.981 "accel_get_opc_assignments", 00:05:37.981 "vmd_rescan", 00:05:37.981 "vmd_remove_device", 00:05:37.981 "vmd_enable", 00:05:37.981 "sock_get_default_impl", 00:05:37.981 "sock_set_default_impl", 00:05:37.981 "sock_impl_set_options", 00:05:37.981 "sock_impl_get_options", 00:05:37.981 "iobuf_get_stats", 00:05:37.981 "iobuf_set_options", 00:05:37.981 "keyring_get_keys", 00:05:37.981 "framework_get_pci_devices", 00:05:37.981 
"framework_get_config", 00:05:37.981 "framework_get_subsystems", 00:05:37.981 "fsdev_set_opts", 00:05:37.981 "fsdev_get_opts", 00:05:37.981 "trace_get_info", 00:05:37.981 "trace_get_tpoint_group_mask", 00:05:37.982 "trace_disable_tpoint_group", 00:05:37.982 "trace_enable_tpoint_group", 00:05:37.982 "trace_clear_tpoint_mask", 00:05:37.982 "trace_set_tpoint_mask", 00:05:37.982 "notify_get_notifications", 00:05:37.982 "notify_get_types", 00:05:37.982 "spdk_get_version", 00:05:37.982 "rpc_get_methods" 00:05:37.982 ] 00:05:37.982 21:11:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.982 21:11:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:37.982 21:11:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71691 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71691 ']' 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71691 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71691 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.982 killing process with pid 71691 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71691' 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71691 00:05:37.982 21:11:27 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71691 00:05:38.556 ************************************ 00:05:38.556 END TEST spdkcli_tcp 00:05:38.556 ************************************ 00:05:38.556 00:05:38.556 real 0m1.700s 00:05:38.556 user 0m2.984s 00:05:38.556 sys 0m0.481s 00:05:38.556 21:11:27 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.556 21:11:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.556 21:11:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.556 21:11:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.556 21:11:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.556 21:11:28 -- common/autotest_common.sh@10 -- # set +x 00:05:38.556 ************************************ 00:05:38.556 START TEST dpdk_mem_utility 00:05:38.556 ************************************ 00:05:38.556 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.556 * Looking for test storage... 
00:05:38.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:38.556 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.556 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.556 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.556 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.556 21:11:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.557 21:11:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.557 --rc genhtml_branch_coverage=1 00:05:38.557 --rc genhtml_function_coverage=1 00:05:38.557 --rc genhtml_legend=1 00:05:38.557 --rc geninfo_all_blocks=1 00:05:38.557 --rc geninfo_unexecuted_blocks=1 00:05:38.557 00:05:38.557 ' 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.557 --rc 
genhtml_branch_coverage=1 00:05:38.557 --rc genhtml_function_coverage=1 00:05:38.557 --rc genhtml_legend=1 00:05:38.557 --rc geninfo_all_blocks=1 00:05:38.557 --rc geninfo_unexecuted_blocks=1 00:05:38.557 00:05:38.557 ' 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.557 --rc genhtml_branch_coverage=1 00:05:38.557 --rc genhtml_function_coverage=1 00:05:38.557 --rc genhtml_legend=1 00:05:38.557 --rc geninfo_all_blocks=1 00:05:38.557 --rc geninfo_unexecuted_blocks=1 00:05:38.557 00:05:38.557 ' 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.557 --rc genhtml_branch_coverage=1 00:05:38.557 --rc genhtml_function_coverage=1 00:05:38.557 --rc genhtml_legend=1 00:05:38.557 --rc geninfo_all_blocks=1 00:05:38.557 --rc geninfo_unexecuted_blocks=1 00:05:38.557 00:05:38.557 ' 00:05:38.557 21:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.557 21:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71791 00:05:38.557 21:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71791 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71791 ']' 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.557 21:11:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.557 21:11:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.557 [2024-12-16 21:11:28.232034] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
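Annotation: the dpdk_mem_utility test starting here is a two-step flow: an env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory snapshot to a file, and dpdk_mem_info.py then renders the heap/mempool/memzone summary that follows below ('-m 0' narrows the report to heap id 0). Reduced to commands, as seen in the trace:

    # Step 1: ask the live target for a DPDK memory snapshot.
    scripts/rpc.py env_dpdk_get_mem_stats      # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    # Step 2: summarize the snapshot; repeat with -m 0 for a single heap's layout.
    scripts/dpdk_mem_info.py
    scripts/dpdk_mem_info.py -m 0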
00:05:38.557 [2024-12-16 21:11:28.232309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71791 ] 00:05:38.819 [2024-12-16 21:11:28.371423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.819 [2024-12-16 21:11:28.390108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.392 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.392 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:39.392 21:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:39.392 21:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:39.392 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.392 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.392 { 00:05:39.392 "filename": "/tmp/spdk_mem_dump.txt" 00:05:39.392 } 00:05:39.392 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.392 21:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:39.656 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:39.656 1 heaps totaling size 818.000000 MiB 00:05:39.656 size: 818.000000 MiB heap id: 0 00:05:39.656 end heaps---------- 00:05:39.656 9 mempools totaling size 603.782043 MiB 00:05:39.656 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:39.656 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:39.656 size: 100.555481 MiB name: bdev_io_71791 00:05:39.656 size: 50.003479 MiB name: msgpool_71791 00:05:39.656 size: 36.509338 MiB name: fsdev_io_71791 00:05:39.656 size: 21.763794 MiB name: PDU_Pool 00:05:39.656 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:39.656 size: 4.133484 MiB name: evtpool_71791 00:05:39.656 size: 0.026123 MiB name: Session_Pool 00:05:39.656 end mempools------- 00:05:39.656 6 memzones totaling size 4.142822 MiB 00:05:39.656 size: 1.000366 MiB name: RG_ring_0_71791 00:05:39.656 size: 1.000366 MiB name: RG_ring_1_71791 00:05:39.656 size: 1.000366 MiB name: RG_ring_4_71791 00:05:39.656 size: 1.000366 MiB name: RG_ring_5_71791 00:05:39.656 size: 0.125366 MiB name: RG_ring_2_71791 00:05:39.656 size: 0.015991 MiB name: RG_ring_3_71791 00:05:39.656 end memzones------- 00:05:39.656 21:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:39.656 heap id: 0 total size: 818.000000 MiB number of busy elements: 315 number of free elements: 15 00:05:39.656 list of free elements. 
size: 10.802856 MiB 00:05:39.656 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:39.656 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:39.656 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:39.656 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:39.656 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:39.656 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:39.656 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:39.656 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:39.656 element at address: 0x20001ae00000 with size: 0.568054 MiB 00:05:39.656 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:39.656 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:39.656 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:39.656 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:39.656 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:39.656 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:39.656 list of standard malloc elements. size: 199.268250 MiB 00:05:39.656 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:39.656 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:39.656 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:39.656 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:39.656 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:39.656 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:39.656 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:39.656 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:39.656 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:39.656 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:39.656 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:39.656 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:39.656 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:39.657 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:39.657 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92e00 with size: 0.000183 MiB 
00:05:39.657 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:39.657 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:39.658 element at 
address: 0x20001ae95380 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:39.658 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:39.658 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e4c0 
with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:39.658 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:39.658 list of memzone associated elements. 
size: 607.928894 MiB 00:05:39.658 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:39.658 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:39.658 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:39.658 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:39.658 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:39.658 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71791_0 00:05:39.658 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:39.658 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71791_0 00:05:39.658 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:39.658 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71791_0 00:05:39.658 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:39.658 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:39.658 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:39.658 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:39.658 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:39.658 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71791_0 00:05:39.658 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:39.658 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71791 00:05:39.658 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:39.658 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71791 00:05:39.658 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:39.659 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:39.659 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:39.659 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:39.659 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:39.659 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:39.659 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:39.659 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:39.659 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:39.659 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71791 00:05:39.659 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:39.659 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71791 00:05:39.659 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:39.659 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71791 00:05:39.659 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:39.659 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71791 00:05:39.659 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:39.659 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71791 00:05:39.659 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:39.659 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71791 00:05:39.659 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:39.659 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:39.659 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:39.659 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:39.659 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:39.659 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:39.659 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:39.659 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71791 00:05:39.659 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:39.659 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71791 00:05:39.659 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:39.659 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:39.659 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:39.659 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:39.659 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:39.659 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71791 00:05:39.659 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:39.659 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:39.659 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:39.659 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71791 00:05:39.659 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:39.659 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71791 00:05:39.659 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:39.659 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71791 00:05:39.659 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:39.659 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:39.659 21:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:39.659 21:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71791 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71791 ']' 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71791 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71791 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71791' 00:05:39.659 killing process with pid 71791 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71791 00:05:39.659 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71791 00:05:39.921 00:05:39.921 real 0m1.486s 00:05:39.921 user 0m1.572s 00:05:39.922 sys 0m0.337s 00:05:39.922 ************************************ 00:05:39.922 END TEST dpdk_mem_utility 00:05:39.922 ************************************ 00:05:39.922 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.922 21:11:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.922 21:11:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:39.922 21:11:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.922 21:11:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.922 21:11:29 -- common/autotest_common.sh@10 -- # set +x 
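The dpdk_mem_utility pass above boils down to two tools: an RPC that asks the running target to dump its DPDK memory statistics to a file, and a script that renders that dump. A minimal sketch of the same sequence, assuming a target already listening on the default RPC socket and the repo path used throughout this job (the dump lands in /tmp/spdk_mem_dump.txt, per the RPC reply above):

  # Ask the target to write DPDK memory statistics to /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize the dump: heaps, mempools, memzones
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # -m 0 additionally prints every busy/free element in heap 0, as listed above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0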
00:05:39.922 ************************************ 00:05:39.922 START TEST event 00:05:39.922 ************************************ 00:05:39.922 21:11:29 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:40.184 * Looking for test storage... 00:05:40.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:40.184 21:11:29 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.184 21:11:29 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.184 21:11:29 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.184 21:11:29 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.184 21:11:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.184 21:11:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.184 21:11:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.184 21:11:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.184 21:11:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.184 21:11:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.184 21:11:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.184 21:11:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.184 21:11:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.184 21:11:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.184 21:11:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.184 21:11:29 event -- scripts/common.sh@344 -- # case "$op" in 00:05:40.184 21:11:29 event -- scripts/common.sh@345 -- # : 1 00:05:40.184 21:11:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.184 21:11:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.184 21:11:29 event -- scripts/common.sh@365 -- # decimal 1 00:05:40.184 21:11:29 event -- scripts/common.sh@353 -- # local d=1 00:05:40.184 21:11:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.184 21:11:29 event -- scripts/common.sh@355 -- # echo 1 00:05:40.184 21:11:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.184 21:11:29 event -- scripts/common.sh@366 -- # decimal 2 00:05:40.184 21:11:29 event -- scripts/common.sh@353 -- # local d=2 00:05:40.184 21:11:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.184 21:11:29 event -- scripts/common.sh@355 -- # echo 2 00:05:40.184 21:11:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.184 21:11:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.184 21:11:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.184 21:11:29 event -- scripts/common.sh@368 -- # return 0 00:05:40.184 21:11:29 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.184 21:11:29 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.184 --rc genhtml_branch_coverage=1 00:05:40.184 --rc genhtml_function_coverage=1 00:05:40.185 --rc genhtml_legend=1 00:05:40.185 --rc geninfo_all_blocks=1 00:05:40.185 --rc geninfo_unexecuted_blocks=1 00:05:40.185 00:05:40.185 ' 00:05:40.185 21:11:29 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.185 --rc genhtml_branch_coverage=1 00:05:40.185 --rc genhtml_function_coverage=1 00:05:40.185 --rc genhtml_legend=1 00:05:40.185 --rc 
geninfo_all_blocks=1 00:05:40.185 --rc geninfo_unexecuted_blocks=1 00:05:40.185 00:05:40.185 ' 00:05:40.185 21:11:29 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.185 --rc genhtml_branch_coverage=1 00:05:40.185 --rc genhtml_function_coverage=1 00:05:40.185 --rc genhtml_legend=1 00:05:40.185 --rc geninfo_all_blocks=1 00:05:40.185 --rc geninfo_unexecuted_blocks=1 00:05:40.185 00:05:40.185 ' 00:05:40.185 21:11:29 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.185 --rc genhtml_branch_coverage=1 00:05:40.185 --rc genhtml_function_coverage=1 00:05:40.185 --rc genhtml_legend=1 00:05:40.185 --rc geninfo_all_blocks=1 00:05:40.185 --rc geninfo_unexecuted_blocks=1 00:05:40.185 00:05:40.185 ' 00:05:40.185 21:11:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:40.185 21:11:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.185 21:11:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.185 21:11:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:40.185 21:11:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.185 21:11:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.185 ************************************ 00:05:40.185 START TEST event_perf 00:05:40.185 ************************************ 00:05:40.185 21:11:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.185 Running I/O for 1 seconds...[2024-12-16 21:11:29.750714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:40.185 [2024-12-16 21:11:29.750967] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71870 ] 00:05:40.446 [2024-12-16 21:11:29.899344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.446 [2024-12-16 21:11:29.931802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.446 [2024-12-16 21:11:29.932124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.446 [2024-12-16 21:11:29.932961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.446 Running I/O for 1 seconds...[2024-12-16 21:11:29.933049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.388 00:05:41.388 lcore 0: 136517 00:05:41.388 lcore 1: 136517 00:05:41.388 lcore 2: 136516 00:05:41.388 lcore 3: 136516 00:05:41.388 done. 
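The per-lcore numbers above appear to be the events each reactor processed during the one-second run (roughly 136.5 thousand per core on this VM), with "done." marking a clean finish. For reference, a sketch of the invocation the script traced, assuming the test binaries built in this workspace:

  # -m 0xF: run a reactor on each of cores 0-3; -t 1: measure for one second
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1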
00:05:41.388 00:05:41.388 real 0m1.271s 00:05:41.388 user 0m4.061s 00:05:41.388 sys 0m0.088s 00:05:41.388 21:11:30 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.388 21:11:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.388 ************************************ 00:05:41.388 END TEST event_perf 00:05:41.388 ************************************ 00:05:41.388 21:11:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:41.388 21:11:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:41.388 21:11:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.388 21:11:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.388 ************************************ 00:05:41.388 START TEST event_reactor 00:05:41.388 ************************************ 00:05:41.388 21:11:31 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:41.388 [2024-12-16 21:11:31.086882] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:41.388 [2024-12-16 21:11:31.086993] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71905 ] 00:05:41.650 [2024-12-16 21:11:31.231300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.650 [2024-12-16 21:11:31.249648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.595 test_start 00:05:42.595 oneshot 00:05:42.595 tick 100 00:05:42.595 tick 100 00:05:42.595 tick 250 00:05:42.595 tick 100 00:05:42.595 tick 100 00:05:42.595 tick 100 00:05:42.595 tick 250 00:05:42.595 tick 500 00:05:42.595 tick 100 00:05:42.595 tick 100 00:05:42.595 tick 250 00:05:42.595 tick 100 00:05:42.595 tick 100 00:05:42.595 test_end 00:05:42.595 00:05:42.595 real 0m1.228s 00:05:42.595 user 0m1.068s 00:05:42.595 sys 0m0.053s 00:05:42.595 ************************************ 00:05:42.595 END TEST event_reactor 00:05:42.595 ************************************ 00:05:42.595 21:11:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.595 21:11:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:42.856 21:11:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.856 21:11:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:42.856 21:11:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.856 21:11:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.856 ************************************ 00:05:42.856 START TEST event_reactor_perf 00:05:42.856 ************************************ 00:05:42.856 21:11:32 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.856 [2024-12-16 21:11:32.377942] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:42.856 [2024-12-16 21:11:32.378157] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71942 ] 00:05:42.856 [2024-12-16 21:11:32.523026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.856 [2024-12-16 21:11:32.541067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.245 test_start 00:05:44.245 test_end 00:05:44.245 Performance: 313964 events per second 00:05:44.245 ************************************ 00:05:44.245 00:05:44.245 real 0m1.231s 00:05:44.245 user 0m1.068s 00:05:44.245 sys 0m0.056s 00:05:44.245 21:11:33 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.245 21:11:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.245 END TEST event_reactor_perf 00:05:44.245 ************************************ 00:05:44.245 21:11:33 event -- event/event.sh@49 -- # uname -s 00:05:44.245 21:11:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:44.245 21:11:33 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:44.245 21:11:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.245 21:11:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.245 21:11:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.245 ************************************ 00:05:44.245 START TEST event_scheduler 00:05:44.245 ************************************ 00:05:44.245 21:11:33 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:44.245 * Looking for test storage... 
00:05:44.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:44.245 21:11:33 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.245 21:11:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.245 21:11:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.245 21:11:33 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:44.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.246 21:11:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.246 --rc genhtml_branch_coverage=1 00:05:44.246 --rc genhtml_function_coverage=1 00:05:44.246 --rc genhtml_legend=1 00:05:44.246 --rc geninfo_all_blocks=1 00:05:44.246 --rc geninfo_unexecuted_blocks=1 00:05:44.246 00:05:44.246 ' 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.246 --rc genhtml_branch_coverage=1 00:05:44.246 --rc genhtml_function_coverage=1 00:05:44.246 --rc genhtml_legend=1 00:05:44.246 --rc geninfo_all_blocks=1 00:05:44.246 --rc geninfo_unexecuted_blocks=1 00:05:44.246 00:05:44.246 ' 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.246 --rc genhtml_branch_coverage=1 00:05:44.246 --rc genhtml_function_coverage=1 00:05:44.246 --rc genhtml_legend=1 00:05:44.246 --rc geninfo_all_blocks=1 00:05:44.246 --rc geninfo_unexecuted_blocks=1 00:05:44.246 00:05:44.246 ' 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.246 --rc genhtml_branch_coverage=1 00:05:44.246 --rc genhtml_function_coverage=1 00:05:44.246 --rc genhtml_legend=1 00:05:44.246 --rc geninfo_all_blocks=1 00:05:44.246 --rc geninfo_unexecuted_blocks=1 00:05:44.246 00:05:44.246 ' 00:05:44.246 21:11:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:44.246 21:11:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72007 00:05:44.246 21:11:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.246 21:11:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72007 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72007 ']' 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.246 21:11:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.246 21:11:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.246 [2024-12-16 21:11:33.856232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:44.246 [2024-12-16 21:11:33.856348] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72007 ] 00:05:44.508 [2024-12-16 21:11:33.994420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.508 [2024-12-16 21:11:34.016677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.508 [2024-12-16 21:11:34.016818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.508 [2024-12-16 21:11:34.017173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.508 [2024-12-16 21:11:34.017201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:45.081 21:11:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.081 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.081 POWER: Cannot set governor of lcore 0 to userspace 00:05:45.081 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.081 POWER: Cannot set governor of lcore 0 to performance 00:05:45.081 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.081 POWER: Cannot set governor of lcore 0 to userspace 00:05:45.081 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:45.081 POWER: Unable to set Power Management Environment for lcore 0 00:05:45.081 [2024-12-16 21:11:34.702658] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:45.081 [2024-12-16 21:11:34.702677] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:45.081 [2024-12-16 21:11:34.702696] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:45.081 [2024-12-16 21:11:34.702711] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:45.081 [2024-12-16 21:11:34.702718] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:45.081 [2024-12-16 21:11:34.702727] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.081 21:11:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.081 [2024-12-16 21:11:34.763769] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
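The POWER and GUEST_CHANNEL errors above are expected on this VM: the dynamic scheduler tries to bring up the DPDK governor by writing CPU frequency governors under /sys and by reaching a virtio power agent, neither of which exists here, so it proceeds without the governor while still applying its default load/core/busy limits of 20/80/95 (the set_opts notices). A sketch of the two RPCs the script issued against the app's /var/tmp/spdk.sock socket, assuming it was started with --wait-for-rpc as in the trace:

  # Select the dynamic scheduler while the framework is still waiting for RPCs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
  # Then let subsystem initialization continue
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init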
00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.081 21:11:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.081 21:11:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.081 ************************************ 00:05:45.081 START TEST scheduler_create_thread 00:05:45.081 ************************************ 00:05:45.081 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:45.081 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.081 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.081 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 2 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 3 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 4 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 5 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 6 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 7 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 8 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.343 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.343 9 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.344 10 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.344 21:11:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 21:11:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.726 21:11:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:46.726 21:11:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:46.726 21:11:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.727 21:11:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.102 ************************************ 00:05:48.102 END TEST scheduler_create_thread 00:05:48.102 ************************************ 00:05:48.102 21:11:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.102 00:05:48.102 real 0m2.611s 00:05:48.102 user 0m0.019s 00:05:48.102 sys 0m0.002s 00:05:48.102 21:11:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.102 21:11:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.102 21:11:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:48.102 21:11:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72007 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72007 ']' 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72007 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72007 00:05:48.102 killing process with pid 72007 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72007' 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72007 00:05:48.102 21:11:37 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72007 00:05:48.363 [2024-12-16 21:11:37.870725] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
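The scheduler_create_thread subtest above drives the test app entirely through rpc.py's plugin mechanism: it creates threads pinned to single-core masks that report themselves fully active (-a 100) or idle (-a 0), plus unpinned threads at intermediate activity, then retunes one thread and deletes another so the dynamic scheduler has load to rebalance. A sketch of the plugin calls as they appear in the trace, assuming scheduler_plugin is importable by rpc.py the way scheduler.sh arranges:

  # Create a thread pinned to core 0 that reports 100% activity
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Drop thread 11 to 50% activity, then delete thread 12
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12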
00:05:48.363 00:05:48.363 real 0m4.359s 00:05:48.363 user 0m8.110s 00:05:48.363 sys 0m0.318s 00:05:48.363 21:11:38 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.363 ************************************ 00:05:48.363 END TEST event_scheduler 00:05:48.363 ************************************ 00:05:48.363 21:11:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.363 21:11:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.363 21:11:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.363 21:11:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.363 21:11:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.363 21:11:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.363 ************************************ 00:05:48.363 START TEST app_repeat 00:05:48.363 ************************************ 00:05:48.363 21:11:38 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:48.625 Process app_repeat pid: 72107 00:05:48.625 spdk_app_start Round 0 00:05:48.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72107 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72107' 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.625 21:11:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72107 /var/tmp/spdk-nbd.sock 00:05:48.625 21:11:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72107 ']' 00:05:48.625 21:11:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.625 21:11:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.625 21:11:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.625 21:11:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.625 21:11:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.625 [2024-12-16 21:11:38.101145] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:48.625 [2024-12-16 21:11:38.101258] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72107 ] 00:05:48.625 [2024-12-16 21:11:38.245923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.625 [2024-12-16 21:11:38.265789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.625 [2024-12-16 21:11:38.265896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.570 21:11:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.570 21:11:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.570 21:11:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.570 Malloc0 00:05:49.570 21:11:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.832 Malloc1 00:05:49.832 21:11:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.832 21:11:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.092 /dev/nbd0 00:05:50.092 21:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.092 21:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.092 21:11:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.092 21:11:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.092 21:11:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.093 21:11:39 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.093 1+0 records in 00:05:50.093 1+0 records out 00:05:50.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397792 s, 10.3 MB/s 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.093 21:11:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.093 21:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.093 21:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.093 21:11:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.355 /dev/nbd1 00:05:50.355 21:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.355 21:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.355 1+0 records in 00:05:50.355 1+0 records out 00:05:50.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286243 s, 14.3 MB/s 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.355 21:11:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.355 21:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.355 21:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.355 21:11:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.355 21:11:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:50.355 21:11:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.615 { 00:05:50.615 "nbd_device": "/dev/nbd0", 00:05:50.615 "bdev_name": "Malloc0" 00:05:50.615 }, 00:05:50.615 { 00:05:50.615 "nbd_device": "/dev/nbd1", 00:05:50.615 "bdev_name": "Malloc1" 00:05:50.615 } 00:05:50.615 ]' 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.615 { 00:05:50.615 "nbd_device": "/dev/nbd0", 00:05:50.615 "bdev_name": "Malloc0" 00:05:50.615 }, 00:05:50.615 { 00:05:50.615 "nbd_device": "/dev/nbd1", 00:05:50.615 "bdev_name": "Malloc1" 00:05:50.615 } 00:05:50.615 ]' 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.615 /dev/nbd1' 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.615 /dev/nbd1' 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.615 21:11:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.616 256+0 records in 00:05:50.616 256+0 records out 00:05:50.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00897155 s, 117 MB/s 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.616 256+0 records in 00:05:50.616 256+0 records out 00:05:50.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231417 s, 45.3 MB/s 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.616 256+0 records in 00:05:50.616 256+0 records out 00:05:50.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0365395 s, 28.7 MB/s 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.616 21:11:40 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.616 21:11:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.875 21:11:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.134 21:11:40 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.134 21:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.393 21:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.393 21:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.393 21:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.393 21:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.393 21:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.393 21:11:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.393 21:11:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.393 21:11:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.393 21:11:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.393 21:11:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.393 21:11:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.651 [2024-12-16 21:11:41.159021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.651 [2024-12-16 21:11:41.176815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.651 [2024-12-16 21:11:41.176974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.651 [2024-12-16 21:11:41.207973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.651 [2024-12-16 21:11:41.208164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.937 spdk_app_start Round 1 00:05:54.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.937 21:11:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.937 21:11:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:54.937 21:11:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72107 /var/tmp/spdk-nbd.sock 00:05:54.937 21:11:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72107 ']' 00:05:54.937 21:11:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.937 21:11:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.937 21:11:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
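Round 0 of app_repeat just ran a full create/attach/verify/detach cycle. Condensed into a sketch (the socket path, bdev sizes, and dd/cmp parameters are taken from the trace; the nbd kernel module must already be loaded, as the earlier modprobe check ensures):

SOCK=/var/tmp/spdk-nbd.sock
rpc() { ./scripts/rpc.py -s "$SOCK" "$@"; }

# Two 64 MiB RAM-backed bdevs with a 4 KiB block size.
rpc bdev_malloc_create 64 4096      # -> Malloc0
rpc bdev_malloc_create 64 4096      # -> Malloc1

# Export them as kernel block devices.
rpc nbd_start_disk Malloc0 /dev/nbd0
rpc nbd_start_disk Malloc1 /dev/nbd1
rpc nbd_get_disks                   # JSON list of both mappings

# Write 1 MiB of random data through each device, then read it back.
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest "$nbd"   # any mismatch fails the test
done
rm nbdrandtest

The spdk_kill_instance SIGTERM at the end of each round is what produces the 'sleep 3' and the fresh reactor start-up notices seen here at the top of Round 1.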
00:05:54.937 21:11:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.937 21:11:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.937 21:11:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.937 21:11:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.937 21:11:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.937 Malloc0 00:05:54.937 21:11:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.196 Malloc1 00:05:55.196 21:11:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.196 21:11:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.196 /dev/nbd0 00:05:55.457 21:11:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.457 21:11:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.457 1+0 records in 00:05:55.457 1+0 records out 
00:05:55.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220448 s, 18.6 MB/s 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.457 21:11:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.457 21:11:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.458 21:11:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.458 21:11:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.458 /dev/nbd1 00:05:55.458 21:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.458 21:11:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.458 1+0 records in 00:05:55.458 1+0 records out 00:05:55.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000129358 s, 31.7 MB/s 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.458 21:11:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.458 21:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.458 21:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.718 { 00:05:55.718 "nbd_device": "/dev/nbd0", 00:05:55.718 "bdev_name": "Malloc0" 00:05:55.718 }, 00:05:55.718 { 00:05:55.718 "nbd_device": "/dev/nbd1", 00:05:55.718 "bdev_name": "Malloc1" 00:05:55.718 } 
00:05:55.718 ]' 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.718 { 00:05:55.718 "nbd_device": "/dev/nbd0", 00:05:55.718 "bdev_name": "Malloc0" 00:05:55.718 }, 00:05:55.718 { 00:05:55.718 "nbd_device": "/dev/nbd1", 00:05:55.718 "bdev_name": "Malloc1" 00:05:55.718 } 00:05:55.718 ]' 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.718 /dev/nbd1' 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.718 /dev/nbd1' 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.718 256+0 records in 00:05:55.718 256+0 records out 00:05:55.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00751747 s, 139 MB/s 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.718 21:11:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.977 256+0 records in 00:05:55.977 256+0 records out 00:05:55.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013796 s, 76.0 MB/s 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.977 256+0 records in 00:05:55.977 256+0 records out 00:05:55.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141123 s, 74.3 MB/s 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.977 21:11:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.236 21:11:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.495 21:11:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.495 21:11:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.753 21:11:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.753 [2024-12-16 21:11:46.400463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.753 [2024-12-16 21:11:46.415706] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.753 [2024-12-16 21:11:46.415729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.753 [2024-12-16 21:11:46.444482] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.753 [2024-12-16 21:11:46.444517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.039 spdk_app_start Round 2 00:06:00.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.039 21:11:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.039 21:11:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:00.039 21:11:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72107 /var/tmp/spdk-nbd.sock 00:06:00.039 21:11:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72107 ']' 00:06:00.039 21:11:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.039 21:11:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.039 21:11:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
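Between rounds the nbd devices are detached before the app instance is killed. A sketch of the teardown that the nbd_stop_disk/waitfornbd_exit traces above correspond to (the 20-iteration polling bound appears in the trace; the sleep interval is an assumption, since the trace succeeds on its first poll):

rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1

# Wait for the kernel to drop each device from /proc/partitions.
for name in nbd0 nbd1; do
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1                    # assumed interval
    done
done

rpc nbd_get_disks                    # now prints [] -> count=0
rpc spdk_kill_instance SIGTERM       # ends this round's app instance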
00:06:00.039 21:11:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.039 21:11:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.039 21:11:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.039 21:11:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.039 21:11:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.039 Malloc0 00:06:00.298 21:11:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.298 Malloc1 00:06:00.298 21:11:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.298 21:11:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.557 /dev/nbd0 00:06:00.557 21:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.557 21:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.557 1+0 records in 00:06:00.557 1+0 records out 
00:06:00.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247683 s, 16.5 MB/s 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.557 21:11:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.557 21:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.557 21:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.557 21:11:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.815 /dev/nbd1 00:06:00.815 21:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.815 21:11:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.815 1+0 records in 00:06:00.815 1+0 records out 00:06:00.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000137333 s, 29.8 MB/s 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.815 21:11:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.815 21:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.815 21:11:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.815 21:11:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.815 21:11:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.815 21:11:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.075 { 00:06:01.075 "nbd_device": "/dev/nbd0", 00:06:01.075 "bdev_name": "Malloc0" 00:06:01.075 }, 00:06:01.075 { 00:06:01.075 "nbd_device": "/dev/nbd1", 00:06:01.075 "bdev_name": "Malloc1" 00:06:01.075 } 
00:06:01.075 ]' 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.075 { 00:06:01.075 "nbd_device": "/dev/nbd0", 00:06:01.075 "bdev_name": "Malloc0" 00:06:01.075 }, 00:06:01.075 { 00:06:01.075 "nbd_device": "/dev/nbd1", 00:06:01.075 "bdev_name": "Malloc1" 00:06:01.075 } 00:06:01.075 ]' 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.075 /dev/nbd1' 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.075 /dev/nbd1' 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.075 256+0 records in 00:06:01.075 256+0 records out 00:06:01.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114569 s, 91.5 MB/s 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.075 256+0 records in 00:06:01.075 256+0 records out 00:06:01.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160101 s, 65.5 MB/s 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.075 256+0 records in 00:06:01.075 256+0 records out 00:06:01.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152591 s, 68.7 MB/s 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.075 21:11:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.075 21:11:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.333 21:11:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.592 21:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.850 21:11:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.850 21:11:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.109 21:11:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.109 [2024-12-16 21:11:51.665247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.109 [2024-12-16 21:11:51.680148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.109 [2024-12-16 21:11:51.680151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.109 [2024-12-16 21:11:51.708470] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.109 [2024-12-16 21:11:51.708510] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.396 21:11:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72107 /var/tmp/spdk-nbd.sock 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72107 ']' 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
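Throughout the rounds, the nbd_get_count helper derives the attached-device count by piping the nbd_get_disks JSON through jq and grep, roughly as in this sketch (the trailing '|| true' guards grep -c's nonzero exit status when nothing matches, which is why the trace shows a bare 'true' step once the list is empty):

count=$(rpc nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd || true)   # grep -c exits 1 on zero matches
[ "$count" -ne 0 ] && echo "unexpected nbd count: $count" >&2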
00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.396 21:11:54 event.app_repeat -- event/event.sh@39 -- # killprocess 72107 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72107 ']' 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72107 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72107 00:06:05.396 killing process with pid 72107 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72107' 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72107 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72107 00:06:05.396 spdk_app_start is called in Round 0. 00:06:05.396 Shutdown signal received, stop current app iteration 00:06:05.396 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:05.396 spdk_app_start is called in Round 1. 00:06:05.396 Shutdown signal received, stop current app iteration 00:06:05.396 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:05.396 spdk_app_start is called in Round 2. 00:06:05.396 Shutdown signal received, stop current app iteration 00:06:05.396 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:05.396 spdk_app_start is called in Round 3. 00:06:05.396 Shutdown signal received, stop current app iteration 00:06:05.396 21:11:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:05.396 21:11:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:05.396 00:06:05.396 real 0m16.865s 00:06:05.396 user 0m37.740s 00:06:05.396 sys 0m2.073s 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.396 21:11:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.396 ************************************ 00:06:05.396 END TEST app_repeat 00:06:05.396 ************************************ 00:06:05.396 21:11:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:05.396 21:11:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:05.396 21:11:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.396 21:11:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.396 21:11:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.396 ************************************ 00:06:05.396 START TEST cpu_locks 00:06:05.396 ************************************ 00:06:05.396 21:11:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:05.396 * Looking for test storage... 
00:06:05.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:05.396 21:11:55 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.396 21:11:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.396 21:11:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.655 21:11:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.655 21:11:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:05.655 21:11:55 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.655 21:11:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.655 --rc genhtml_branch_coverage=1 00:06:05.655 --rc genhtml_function_coverage=1 00:06:05.655 --rc genhtml_legend=1 00:06:05.655 --rc geninfo_all_blocks=1 00:06:05.655 --rc geninfo_unexecuted_blocks=1 00:06:05.655 00:06:05.655 ' 00:06:05.655 21:11:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.655 --rc genhtml_branch_coverage=1 00:06:05.655 --rc genhtml_function_coverage=1 
00:06:05.655 --rc genhtml_legend=1 00:06:05.655 --rc geninfo_all_blocks=1 00:06:05.655 --rc geninfo_unexecuted_blocks=1 00:06:05.655 00:06:05.655 ' 00:06:05.655 21:11:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.655 --rc genhtml_branch_coverage=1 00:06:05.655 --rc genhtml_function_coverage=1 00:06:05.655 --rc genhtml_legend=1 00:06:05.655 --rc geninfo_all_blocks=1 00:06:05.655 --rc geninfo_unexecuted_blocks=1 00:06:05.655 00:06:05.655 ' 00:06:05.655 21:11:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.655 --rc genhtml_branch_coverage=1 00:06:05.655 --rc genhtml_function_coverage=1 00:06:05.655 --rc genhtml_legend=1 00:06:05.655 --rc geninfo_all_blocks=1 00:06:05.655 --rc geninfo_unexecuted_blocks=1 00:06:05.655 00:06:05.655 ' 00:06:05.655 21:11:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:05.655 21:11:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:05.655 21:11:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:05.655 21:11:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:05.655 21:11:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.655 21:11:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.655 21:11:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.655 ************************************ 00:06:05.655 START TEST default_locks 00:06:05.655 ************************************ 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72527 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72527 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72527 ']' 00:06:05.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.655 21:11:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.655 [2024-12-16 21:11:55.211358] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
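The lt/cmp_versions trace above (used here to pick lcov coverage flags) splits each version string on '.', '-' and ':' and compares numerically, component by component. A condensed sketch of that comparison, not the verbatim scripts/common.sh helper, and assuming purely numeric components:

    # version_lt A B: succeed (return 0) when A sorts strictly before B.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing components count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # the case traced above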
00:06:05.655 [2024-12-16 21:11:55.211480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72527 ] 00:06:05.655 [2024-12-16 21:11:55.353261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.914 [2024-12-16 21:11:55.369459] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.480 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.480 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:06.480 21:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72527 00:06:06.480 21:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.480 21:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72527 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72527 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72527 ']' 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72527 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72527 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.739 killing process with pid 72527 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72527' 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72527 00:06:06.739 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72527 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72527 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72527 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72527 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72527 ']' 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.000 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.000 ERROR: process (pid: 72527) is no longer running 00:06:07.000 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72527) - No such process 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.000 21:11:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.001 00:06:07.001 real 0m1.377s 00:06:07.001 user 0m1.446s 00:06:07.001 sys 0m0.379s 00:06:07.001 ************************************ 00:06:07.001 END TEST default_locks 00:06:07.001 ************************************ 00:06:07.001 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.001 21:11:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.001 21:11:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:07.001 21:11:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.001 21:11:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.001 21:11:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.001 ************************************ 00:06:07.001 START TEST default_locks_via_rpc 00:06:07.001 ************************************ 00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72569 00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72569 00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72569 ']' 00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
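The locks_exist check traced above confirms the target is actually holding its CPU-core lock by listing the file locks owned by the PID. The same check, assuming util-linux lslocks as used by cpu_locks.sh:

    # SPDK claims one /var/tmp/spdk_cpu_lock_XXX file per core in its mask;
    # lslocks -p lists the locks a PID holds, grep -q just tests for a match.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 72527 && echo "pid 72527 holds its core lock"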
00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.001 21:11:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.001 [2024-12-16 21:11:56.648864] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:07.001 [2024-12-16 21:11:56.648985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72569 ] 00:06:07.260 [2024-12-16 21:11:56.790744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.260 [2024-12-16 21:11:56.806898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72569 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.856 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72569 00:06:08.139 21:11:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72569 00:06:08.139 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72569 ']' 00:06:08.140 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72569 00:06:08.140 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:08.140 21:11:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.140 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72569 00:06:08.140 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.140 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.140 killing process with pid 72569 00:06:08.140 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72569' 00:06:08.140 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72569 00:06:08.140 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72569 00:06:08.404 00:06:08.404 real 0m1.333s 00:06:08.404 user 0m1.397s 00:06:08.404 sys 0m0.371s 00:06:08.404 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.404 ************************************ 00:06:08.404 END TEST default_locks_via_rpc 00:06:08.404 ************************************ 00:06:08.404 21:11:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.404 21:11:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:08.404 21:11:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.404 21:11:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.404 21:11:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.404 ************************************ 00:06:08.404 START TEST non_locking_app_on_locked_coremask 00:06:08.404 ************************************ 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72621 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72621 /var/tmp/spdk.sock 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72621 ']' 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.404 21:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.404 [2024-12-16 21:11:58.042188] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
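default_locks_via_rpc, which finishes above, exercises the runtime toggle instead of launch flags: framework_disable_cpumask_locks releases the lock files and framework_enable_cpumask_locks re-claims them. A minimal round trip over the default socket, assuming the trace's repo layout:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks   # drop /var/tmp/spdk_cpu_lock_* for the claimed cores
    "$rpc" framework_enable_cpumask_locks    # take the locks again; fails if another process won them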
00:06:08.404 [2024-12-16 21:11:58.042313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72621 ] 00:06:08.662 [2024-12-16 21:11:58.182492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.662 [2024-12-16 21:11:58.198489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72637 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72637 /var/tmp/spdk2.sock 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72637 ']' 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.228 21:11:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.228 [2024-12-16 21:11:58.897078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:09.228 [2024-12-16 21:11:58.897465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72637 ] 00:06:09.486 [2024-12-16 21:11:59.044079] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
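This is the point non_locking_app_on_locked_coremask is making: pid 72621 holds the core 0 lock, yet pid 72637 starts on the same core because it was launched with --disable-cpumask-locks, hence the "CPU core locks deactivated" notice. A sketch of the two launches, assuming the trace's binary and socket paths; the real test waits for each RPC socket rather than just backgrounding:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                                                   # claims /var/tmp/spdk_cpu_lock_000
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # same core, takes no lock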
00:06:09.486 [2024-12-16 21:11:59.044121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.486 [2024-12-16 21:11:59.076316] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.052 21:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.052 21:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.052 21:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72621 00:06:10.052 21:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72621 00:06:10.052 21:11:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.311 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72621 00:06:10.311 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72621 ']' 00:06:10.311 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72621 00:06:10.311 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.568 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.568 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72621 00:06:10.568 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.568 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.568 killing process with pid 72621 00:06:10.568 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72621' 00:06:10.568 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72621 00:06:10.568 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72621 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72637 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72637 ']' 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72637 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72637 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.825 killing process with pid 72637 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72637' 00:06:10.825 21:12:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72637 00:06:10.825 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72637 00:06:11.083 00:06:11.083 real 0m2.720s 00:06:11.083 user 0m3.021s 00:06:11.083 sys 0m0.703s 00:06:11.083 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.083 21:12:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.083 ************************************ 00:06:11.083 END TEST non_locking_app_on_locked_coremask 00:06:11.083 ************************************ 00:06:11.083 21:12:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:11.083 21:12:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.083 21:12:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.083 21:12:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.083 ************************************ 00:06:11.083 START TEST locking_app_on_unlocked_coremask 00:06:11.083 ************************************ 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72694 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72694 /var/tmp/spdk.sock 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72694 ']' 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.083 21:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:11.342 [2024-12-16 21:12:00.829395] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:11.342 [2024-12-16 21:12:00.829517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72694 ] 00:06:11.342 [2024-12-16 21:12:00.970674] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
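The killprocess helper that recurs throughout these tests (here reaping pids 72621 and 72637) first sanity-checks that the PID still looks like an SPDK reactor before signalling it. A reduced sketch; the real autotest_common.sh version also special-cases sudo and non-Linux hosts:

    killprocess() {
        local pid=$1 process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true   # reap it; death by SIGTERM reports a nonzero status
    }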
00:06:11.342 [2024-12-16 21:12:00.970723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.342 [2024-12-16 21:12:00.992985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.275 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.275 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.275 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72700 00:06:12.275 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72700 /var/tmp/spdk2.sock 00:06:12.275 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72700 ']' 00:06:12.276 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.276 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.276 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.276 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.276 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.276 21:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.276 [2024-12-16 21:12:01.730584] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:12.276 [2024-12-16 21:12:01.730703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72700 ] 00:06:12.276 [2024-12-16 21:12:01.877196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.276 [2024-12-16 21:12:01.909409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72700 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72700 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72694 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72694 ']' 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72694 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.209 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72694 00:06:13.467 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.467 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.467 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72694' 00:06:13.467 killing process with pid 72694 00:06:13.467 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72694 00:06:13.467 21:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72694 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72700 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72700 ']' 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72700 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72700 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.726 killing process with pid 72700 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72700' 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72700 00:06:13.726 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72700 00:06:13.985 00:06:13.985 real 0m2.837s 00:06:13.985 user 0m3.161s 00:06:13.985 sys 0m0.752s 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.985 ************************************ 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.985 END TEST locking_app_on_unlocked_coremask 00:06:13.985 ************************************ 00:06:13.985 21:12:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:13.985 21:12:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.985 21:12:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.985 21:12:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.985 ************************************ 00:06:13.985 START TEST locking_app_on_locked_coremask 00:06:13.985 ************************************ 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72758 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72758 /var/tmp/spdk.sock 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72758 ']' 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.985 21:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.246 [2024-12-16 21:12:03.723916] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:14.246 [2024-12-16 21:12:03.724034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72758 ] 00:06:14.246 [2024-12-16 21:12:03.858853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.246 [2024-12-16 21:12:03.875571] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72774 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72774 /var/tmp/spdk2.sock 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72774 /var/tmp/spdk2.sock 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.813 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:15.072 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.072 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72774 /var/tmp/spdk2.sock 00:06:15.072 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72774 ']' 00:06:15.072 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.072 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.072 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.072 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.072 21:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.072 [2024-12-16 21:12:04.573110] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
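The NOT wrapper driving this step inverts a command's exit status: waitforlisten for pid 72774 is expected to fail, since core 0 is already locked by 72758 and the second target dies before it ever listens. A reduced sketch of the wrapper, omitting the valid_exec_arg and es bookkeeping that autotest_common.sh layers on top:

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # unexpected success
        fi
        return 0
    }
    NOT waitforlisten 72774 /var/tmp/spdk2.sock   # passes once 72774 is confirmed dead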
00:06:15.072 [2024-12-16 21:12:04.573225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72774 ] 00:06:15.072 [2024-12-16 21:12:04.720057] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72758 has claimed it. 00:06:15.072 [2024-12-16 21:12:04.720111] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.638 ERROR: process (pid: 72774) is no longer running 00:06:15.638 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72774) - No such process 00:06:15.638 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.638 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:15.638 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:15.638 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.638 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.638 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.638 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72758 00:06:15.638 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72758 00:06:15.638 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.897 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72758 00:06:15.897 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72758 ']' 00:06:15.898 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72758 00:06:15.898 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.898 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.898 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72758 00:06:15.898 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.898 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.898 killing process with pid 72758 00:06:15.898 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72758' 00:06:15.898 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72758 00:06:15.898 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72758 00:06:16.158 00:06:16.158 real 0m1.965s 00:06:16.158 user 0m2.187s 00:06:16.158 sys 0m0.455s 00:06:16.158 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.158 ************************************ 00:06:16.158 END 
TEST locking_app_on_locked_coremask 00:06:16.158 ************************************ 00:06:16.158 21:12:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.158 21:12:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:16.158 21:12:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.158 21:12:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.158 21:12:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.158 ************************************ 00:06:16.158 START TEST locking_overlapped_coremask 00:06:16.158 ************************************ 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72816 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72816 /var/tmp/spdk.sock 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72816 ']' 00:06:16.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.158 21:12:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:16.158 [2024-12-16 21:12:05.751279] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:16.158 [2024-12-16 21:12:05.751397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72816 ] 00:06:16.453 [2024-12-16 21:12:05.899005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.453 [2024-12-16 21:12:05.920150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.453 [2024-12-16 21:12:05.920501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.453 [2024-12-16 21:12:05.920574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72834 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72834 /var/tmp/spdk2.sock 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72834 /var/tmp/spdk2.sock 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:17.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72834 /var/tmp/spdk2.sock 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72834 ']' 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.027 21:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.027 [2024-12-16 21:12:06.653896] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:17.027 [2024-12-16 21:12:06.654183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72834 ] 00:06:17.289 [2024-12-16 21:12:06.814096] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72816 has claimed it. 00:06:17.289 [2024-12-16 21:12:06.814148] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.857 ERROR: process (pid: 72834) is no longer running 00:06:17.857 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72834) - No such process 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72816 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72816 ']' 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72816 00:06:17.857 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.858 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.858 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72816 00:06:17.858 killing process with pid 72816 00:06:17.858 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.858 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.858 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72816' 00:06:17.858 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72816 00:06:17.858 21:12:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72816 00:06:17.858 ************************************ 00:06:17.858 END TEST locking_overlapped_coremask 00:06:17.858 ************************************ 00:06:17.858 00:06:17.858 real 0m1.858s 00:06:17.858 user 0m5.193s 00:06:17.858 sys 0m0.381s 00:06:17.858 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.858 21:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.120 21:12:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:18.120 21:12:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.120 21:12:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.120 21:12:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.120 ************************************ 00:06:18.120 START TEST locking_overlapped_coremask_via_rpc 00:06:18.120 ************************************ 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72876 00:06:18.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72876 /var/tmp/spdk.sock 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72876 ']' 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.120 21:12:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.120 [2024-12-16 21:12:07.667315] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:18.120 [2024-12-16 21:12:07.667438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72876 ] 00:06:18.120 [2024-12-16 21:12:07.812644] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
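check_remaining_locks, traced just above for the 0x7 mask, asserts that cores 0 through 2 each still own exactly one lock file by comparing a filesystem glob against a brace expansion:

    # Expect exactly /var/tmp/spdk_cpu_lock_000 .. _002 and nothing else.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"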
00:06:18.120 [2024-12-16 21:12:07.812685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.381 [2024-12-16 21:12:07.833682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.381 [2024-12-16 21:12:07.833833] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.381 [2024-12-16 21:12:07.833918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72894 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72894 /var/tmp/spdk2.sock 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72894 ']' 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.953 21:12:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.953 [2024-12-16 21:12:08.589090] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:18.953 [2024-12-16 21:12:08.589490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72894 ] 00:06:19.215 [2024-12-16 21:12:08.758494] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
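At this point the test has launched two spdk_tgt instances with deliberately overlapping core masks: the first with -m 0x7 (cores 0-2, whose reactors started above) and the second with -m 0x1c (cores 2-4, whose reactors start just below), each on its own RPC socket (/var/tmp/spdk.sock by default, /var/tmp/spdk2.sock via -r) and each with --disable-cpumask-locks so startup succeeds even though both masks cover core 2. A minimal sketch of decoding the two masks, not part of the test itself:

  decode_mask() {
    local mask=$(( $1 ))            # arithmetic expansion accepts the 0x prefix
    for i in {0..7}; do
      (( (mask >> i) & 1 )) && echo "core $i"
    done
  }
  decode_mask 0x7     # cores 0, 1, 2
  decode_mask 0x1c    # cores 2, 3, 4 -- overlapping the first target on core 2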
00:06:19.215 [2024-12-16 21:12:08.758544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.215 [2024-12-16 21:12:08.799480] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.215 [2024-12-16 21:12:08.802819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.215 [2024-12-16 21:12:08.802896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.783 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.783 [2024-12-16 21:12:09.430741] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72876 has claimed it. 00:06:19.783 request: 00:06:19.783 { 00:06:19.784 "method": "framework_enable_cpumask_locks", 00:06:19.784 "req_id": 1 00:06:19.784 } 00:06:19.784 Got JSON-RPC error response 00:06:19.784 response: 00:06:19.784 { 00:06:19.784 "code": -32603, 00:06:19.784 "message": "Failed to claim CPU core: 2" 00:06:19.784 } 00:06:19.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
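The JSON-RPC exchange above is the heart of the test: framework_enable_cpumask_locks succeeds on the first target, but on the second target the claim of core 2 fails with -32603 because pid 72876 already holds the lock file for that core. The rpc_cmd wrapper traced above is a thin shim over scripts/rpc.py; a sketch of issuing the same two calls by hand, assuming both targets from this run are still listening:

  # First target (default socket /var/tmp/spdk.sock, cores 0-2): succeeds.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # Second target (cores 2-4): core 2 is already claimed, so this returns
  # the -32603 "Failed to claim CPU core: 2" error shown above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks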
00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72876 /var/tmp/spdk.sock 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72876 ']' 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.784 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.042 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.042 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.042 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72894 /var/tmp/spdk2.sock 00:06:20.042 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72894 ']' 00:06:20.042 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.042 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.042 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
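Once the expected failure has been observed, the test verifies (in the check_remaining_locks trace that follows) that exactly the three per-core lock files for cores 0-2 exist under /var/tmp. A condensed sketch of that check, lifted from the event/cpu_locks.sh lines traced below:

  locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever is actually on disk
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # the set cores 0-2 should leave
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match"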
00:06:20.042 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.042 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.301 ************************************ 00:06:20.301 END TEST locking_overlapped_coremask_via_rpc 00:06:20.301 ************************************ 00:06:20.301 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.301 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.301 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.301 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.301 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.301 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.301 00:06:20.301 real 0m2.225s 00:06:20.301 user 0m1.025s 00:06:20.301 sys 0m0.128s 00:06:20.301 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.301 21:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.301 21:12:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:20.301 21:12:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72876 ]] 00:06:20.301 21:12:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72876 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72876 ']' 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72876 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72876 00:06:20.301 killing process with pid 72876 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72876' 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72876 00:06:20.301 21:12:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72876 00:06:20.560 21:12:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72894 ]] 00:06:20.560 21:12:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72894 00:06:20.560 21:12:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72894 ']' 00:06:20.560 21:12:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72894 00:06:20.560 21:12:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:20.560 21:12:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.560 
21:12:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72894 00:06:20.560 killing process with pid 72894 00:06:20.560 21:12:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:20.560 21:12:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:20.560 21:12:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72894' 00:06:20.560 21:12:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72894 00:06:20.560 21:12:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72894 00:06:20.819 21:12:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.819 Process with pid 72876 is not found 00:06:20.819 21:12:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:20.819 21:12:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72876 ]] 00:06:20.819 21:12:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72876 00:06:20.819 21:12:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72876 ']' 00:06:20.819 21:12:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72876 00:06:20.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72876) - No such process 00:06:20.819 21:12:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72876 is not found' 00:06:20.819 21:12:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72894 ]] 00:06:20.819 21:12:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72894 00:06:20.819 21:12:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72894 ']' 00:06:20.819 21:12:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72894 00:06:20.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72894) - No such process 00:06:20.819 Process with pid 72894 is not found 00:06:20.819 21:12:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72894 is not found' 00:06:20.819 21:12:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.819 00:06:20.819 real 0m15.351s 00:06:20.819 user 0m27.260s 00:06:20.819 sys 0m3.891s 00:06:20.819 21:12:10 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.819 21:12:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.819 ************************************ 00:06:20.819 END TEST cpu_locks 00:06:20.819 ************************************ 00:06:20.819 ************************************ 00:06:20.819 END TEST event 00:06:20.819 ************************************ 00:06:20.819 00:06:20.819 real 0m40.808s 00:06:20.819 user 1m19.471s 00:06:20.819 sys 0m6.723s 00:06:20.819 21:12:10 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.819 21:12:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.819 21:12:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:20.819 21:12:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.819 21:12:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.819 21:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:20.819 ************************************ 00:06:20.819 START TEST thread 00:06:20.819 ************************************ 00:06:20.819 21:12:10 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:20.819 * Looking for test storage... 
00:06:20.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:20.819 21:12:10 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.819 21:12:10 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.819 21:12:10 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.819 21:12:10 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.819 21:12:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.819 21:12:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.819 21:12:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.819 21:12:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.819 21:12:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.819 21:12:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.819 21:12:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.819 21:12:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.819 21:12:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.819 21:12:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.819 21:12:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.819 21:12:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:20.819 21:12:10 thread -- scripts/common.sh@345 -- # : 1 00:06:20.819 21:12:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.819 21:12:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.819 21:12:10 thread -- scripts/common.sh@365 -- # decimal 1 00:06:20.819 21:12:10 thread -- scripts/common.sh@353 -- # local d=1 00:06:20.819 21:12:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.819 21:12:10 thread -- scripts/common.sh@355 -- # echo 1 00:06:20.819 21:12:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.819 21:12:10 thread -- scripts/common.sh@366 -- # decimal 2 00:06:21.080 21:12:10 thread -- scripts/common.sh@353 -- # local d=2 00:06:21.080 21:12:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.080 21:12:10 thread -- scripts/common.sh@355 -- # echo 2 00:06:21.080 21:12:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.080 21:12:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.080 21:12:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.080 21:12:10 thread -- scripts/common.sh@368 -- # return 0 00:06:21.081 21:12:10 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.081 21:12:10 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.081 --rc genhtml_branch_coverage=1 00:06:21.081 --rc genhtml_function_coverage=1 00:06:21.081 --rc genhtml_legend=1 00:06:21.081 --rc geninfo_all_blocks=1 00:06:21.081 --rc geninfo_unexecuted_blocks=1 00:06:21.081 00:06:21.081 ' 00:06:21.081 21:12:10 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.081 --rc genhtml_branch_coverage=1 00:06:21.081 --rc genhtml_function_coverage=1 00:06:21.081 --rc genhtml_legend=1 00:06:21.081 --rc geninfo_all_blocks=1 00:06:21.081 --rc geninfo_unexecuted_blocks=1 00:06:21.081 00:06:21.081 ' 00:06:21.081 21:12:10 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:21.081 --rc genhtml_branch_coverage=1 00:06:21.081 --rc genhtml_function_coverage=1 00:06:21.081 --rc genhtml_legend=1 00:06:21.081 --rc geninfo_all_blocks=1 00:06:21.081 --rc geninfo_unexecuted_blocks=1 00:06:21.081 00:06:21.081 ' 00:06:21.081 21:12:10 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.081 --rc genhtml_branch_coverage=1 00:06:21.081 --rc genhtml_function_coverage=1 00:06:21.081 --rc genhtml_legend=1 00:06:21.081 --rc geninfo_all_blocks=1 00:06:21.081 --rc geninfo_unexecuted_blocks=1 00:06:21.081 00:06:21.081 ' 00:06:21.081 21:12:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.081 21:12:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:21.081 21:12:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.081 21:12:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.081 ************************************ 00:06:21.081 START TEST thread_poller_perf 00:06:21.081 ************************************ 00:06:21.081 21:12:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.081 [2024-12-16 21:12:10.561548] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:21.081 [2024-12-16 21:12:10.561725] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73021 ] 00:06:21.081 [2024-12-16 21:12:10.702384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.081 [2024-12-16 21:12:10.722237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.081 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:22.469 [2024-12-16T21:12:12.169Z] ====================================== 00:06:22.469 [2024-12-16T21:12:12.169Z] busy:2614934602 (cyc) 00:06:22.470 [2024-12-16T21:12:12.170Z] total_run_count: 306000 00:06:22.470 [2024-12-16T21:12:12.170Z] tsc_hz: 2600000000 (cyc) 00:06:22.470 [2024-12-16T21:12:12.170Z] ====================================== 00:06:22.470 [2024-12-16T21:12:12.170Z] poller_cost: 8545 (cyc), 3286 (nsec) 00:06:22.470 00:06:22.470 real 0m1.252s 00:06:22.470 user 0m1.094s 00:06:22.470 sys 0m0.051s 00:06:22.470 21:12:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.470 21:12:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.470 ************************************ 00:06:22.470 END TEST thread_poller_perf 00:06:22.470 ************************************ 00:06:22.470 21:12:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.470 21:12:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:22.470 21:12:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.470 21:12:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.470 ************************************ 00:06:22.470 START TEST thread_poller_perf 00:06:22.470 ************************************ 00:06:22.470 21:12:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.470 [2024-12-16 21:12:11.878864] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:22.470 [2024-12-16 21:12:11.879188] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73052 ] 00:06:22.470 [2024-12-16 21:12:12.024331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.470 [2024-12-16 21:12:12.052475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.470 Running 1000 pollers for 1 seconds with 0 microseconds period. 
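The poller_cost reported in these result blocks is derived from the other figures: busy cycles divided by total_run_count gives the per-poll cost in cycles, and dividing by the 2.6 GHz tsc_hz converts that to nanoseconds. A quick check of the first run's numbers, assuming the integer truncation that reproduces the printed values:

  awk 'BEGIN { busy = 2614934602; runs = 306000; hz = 2600000000
               cyc = int(busy / runs)                                # 8545 cyc, as reported
               printf "%d cyc, %d nsec\n", cyc, int(cyc * 1e9 / hz) }'  # -> 8545 cyc, 3286 nsec

The zero-period run reported just below works out the same way: int(2604144234 / 3635000) = 716 cyc, or 275 nsec at the same TSC rate.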
00:06:23.415 [2024-12-16T21:12:13.115Z] ====================================== 00:06:23.415 [2024-12-16T21:12:13.115Z] busy:2604144234 (cyc) 00:06:23.415 [2024-12-16T21:12:13.115Z] total_run_count: 3635000 00:06:23.415 [2024-12-16T21:12:13.115Z] tsc_hz: 2600000000 (cyc) 00:06:23.415 [2024-12-16T21:12:13.115Z] ====================================== 00:06:23.415 [2024-12-16T21:12:13.115Z] poller_cost: 716 (cyc), 275 (nsec) 00:06:23.415 ************************************ 00:06:23.415 END TEST thread_poller_perf 00:06:23.415 ************************************ 00:06:23.415 00:06:23.415 real 0m1.257s 00:06:23.415 user 0m1.093s 00:06:23.415 sys 0m0.056s 00:06:23.415 21:12:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.415 21:12:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.675 21:12:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:23.675 ************************************ 00:06:23.675 END TEST thread 00:06:23.675 ************************************ 00:06:23.675 00:06:23.675 real 0m2.757s 00:06:23.675 user 0m2.278s 00:06:23.675 sys 0m0.235s 00:06:23.675 21:12:13 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.675 21:12:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.675 21:12:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:23.675 21:12:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:23.675 21:12:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.675 21:12:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.675 21:12:13 -- common/autotest_common.sh@10 -- # set +x 00:06:23.675 ************************************ 00:06:23.675 START TEST app_cmdline 00:06:23.675 ************************************ 00:06:23.675 21:12:13 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:23.675 * Looking for test storage... 
00:06:23.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:23.675 21:12:13 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.675 21:12:13 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.675 21:12:13 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.675 21:12:13 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.675 21:12:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.675 21:12:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.675 21:12:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:23.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.676 21:12:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.676 --rc genhtml_branch_coverage=1 00:06:23.676 --rc genhtml_function_coverage=1 00:06:23.676 --rc genhtml_legend=1 00:06:23.676 --rc geninfo_all_blocks=1 00:06:23.676 --rc geninfo_unexecuted_blocks=1 00:06:23.676 00:06:23.676 ' 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.676 --rc genhtml_branch_coverage=1 00:06:23.676 --rc genhtml_function_coverage=1 00:06:23.676 --rc genhtml_legend=1 00:06:23.676 --rc geninfo_all_blocks=1 00:06:23.676 --rc geninfo_unexecuted_blocks=1 00:06:23.676 00:06:23.676 ' 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.676 --rc genhtml_branch_coverage=1 00:06:23.676 --rc genhtml_function_coverage=1 00:06:23.676 --rc genhtml_legend=1 00:06:23.676 --rc geninfo_all_blocks=1 00:06:23.676 --rc geninfo_unexecuted_blocks=1 00:06:23.676 00:06:23.676 ' 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.676 --rc genhtml_branch_coverage=1 00:06:23.676 --rc genhtml_function_coverage=1 00:06:23.676 --rc genhtml_legend=1 00:06:23.676 --rc geninfo_all_blocks=1 00:06:23.676 --rc geninfo_unexecuted_blocks=1 00:06:23.676 00:06:23.676 ' 00:06:23.676 21:12:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:23.676 21:12:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73141 00:06:23.676 21:12:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73141 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73141 ']' 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.676 21:12:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.676 21:12:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:23.937 [2024-12-16 21:12:13.417321] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:23.937 [2024-12-16 21:12:13.417906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73141 ] 00:06:23.937 [2024-12-16 21:12:13.559244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.937 [2024-12-16 21:12:13.588878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.882 21:12:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.882 21:12:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:24.882 21:12:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:24.882 { 00:06:24.882 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:24.882 "fields": { 00:06:24.882 "major": 25, 00:06:24.882 "minor": 1, 00:06:24.882 "patch": 0, 00:06:24.883 "suffix": "-pre", 00:06:24.883 "commit": "e01cb43b8" 00:06:24.883 } 00:06:24.883 } 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:24.883 21:12:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:24.883 21:12:14 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.144 request: 00:06:25.144 { 00:06:25.144 "method": "env_dpdk_get_mem_stats", 00:06:25.144 "req_id": 1 00:06:25.144 } 00:06:25.144 Got JSON-RPC error response 00:06:25.144 response: 00:06:25.145 { 00:06:25.145 "code": -32601, 00:06:25.145 "message": "Method not found" 00:06:25.145 } 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.145 21:12:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73141 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73141 ']' 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73141 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73141 00:06:25.145 killing process with pid 73141 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73141' 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@973 -- # kill 73141 00:06:25.145 21:12:14 app_cmdline -- common/autotest_common.sh@978 -- # wait 73141 00:06:25.405 ************************************ 00:06:25.405 END TEST app_cmdline 00:06:25.405 ************************************ 00:06:25.405 00:06:25.405 real 0m1.848s 00:06:25.405 user 0m2.174s 00:06:25.405 sys 0m0.455s 00:06:25.405 21:12:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.405 21:12:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.405 21:12:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:25.405 21:12:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.405 21:12:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.405 21:12:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.665 ************************************ 00:06:25.665 START TEST version 00:06:25.665 ************************************ 00:06:25.665 21:12:15 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:25.665 * Looking for test storage... 
00:06:25.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:25.665 21:12:15 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.665 21:12:15 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.665 21:12:15 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.665 21:12:15 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.665 21:12:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.665 21:12:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.665 21:12:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.665 21:12:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.665 21:12:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.665 21:12:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.665 21:12:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.665 21:12:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.665 21:12:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.665 21:12:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.665 21:12:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.665 21:12:15 version -- scripts/common.sh@344 -- # case "$op" in 00:06:25.665 21:12:15 version -- scripts/common.sh@345 -- # : 1 00:06:25.665 21:12:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.665 21:12:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.665 21:12:15 version -- scripts/common.sh@365 -- # decimal 1 00:06:25.665 21:12:15 version -- scripts/common.sh@353 -- # local d=1 00:06:25.665 21:12:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.665 21:12:15 version -- scripts/common.sh@355 -- # echo 1 00:06:25.665 21:12:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.665 21:12:15 version -- scripts/common.sh@366 -- # decimal 2 00:06:25.665 21:12:15 version -- scripts/common.sh@353 -- # local d=2 00:06:25.665 21:12:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.665 21:12:15 version -- scripts/common.sh@355 -- # echo 2 00:06:25.665 21:12:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.665 21:12:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.665 21:12:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.665 21:12:15 version -- scripts/common.sh@368 -- # return 0 00:06:25.665 21:12:15 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.665 21:12:15 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.665 --rc genhtml_branch_coverage=1 00:06:25.665 --rc genhtml_function_coverage=1 00:06:25.665 --rc genhtml_legend=1 00:06:25.665 --rc geninfo_all_blocks=1 00:06:25.665 --rc geninfo_unexecuted_blocks=1 00:06:25.665 00:06:25.665 ' 00:06:25.665 21:12:15 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.665 --rc genhtml_branch_coverage=1 00:06:25.665 --rc genhtml_function_coverage=1 00:06:25.665 --rc genhtml_legend=1 00:06:25.665 --rc geninfo_all_blocks=1 00:06:25.665 --rc geninfo_unexecuted_blocks=1 00:06:25.665 00:06:25.665 ' 00:06:25.665 21:12:15 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.665 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:25.665 --rc genhtml_branch_coverage=1 00:06:25.665 --rc genhtml_function_coverage=1 00:06:25.665 --rc genhtml_legend=1 00:06:25.665 --rc geninfo_all_blocks=1 00:06:25.665 --rc geninfo_unexecuted_blocks=1 00:06:25.665 00:06:25.665 ' 00:06:25.666 21:12:15 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.666 --rc genhtml_branch_coverage=1 00:06:25.666 --rc genhtml_function_coverage=1 00:06:25.666 --rc genhtml_legend=1 00:06:25.666 --rc geninfo_all_blocks=1 00:06:25.666 --rc geninfo_unexecuted_blocks=1 00:06:25.666 00:06:25.666 ' 00:06:25.666 21:12:15 version -- app/version.sh@17 -- # get_header_version major 00:06:25.666 21:12:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:25.666 21:12:15 version -- app/version.sh@14 -- # cut -f2 00:06:25.666 21:12:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.666 21:12:15 version -- app/version.sh@17 -- # major=25 00:06:25.666 21:12:15 version -- app/version.sh@18 -- # get_header_version minor 00:06:25.666 21:12:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:25.666 21:12:15 version -- app/version.sh@14 -- # cut -f2 00:06:25.666 21:12:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.666 21:12:15 version -- app/version.sh@18 -- # minor=1 00:06:25.666 21:12:15 version -- app/version.sh@19 -- # get_header_version patch 00:06:25.666 21:12:15 version -- app/version.sh@14 -- # cut -f2 00:06:25.666 21:12:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.666 21:12:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:25.666 21:12:15 version -- app/version.sh@19 -- # patch=0 00:06:25.666 21:12:15 version -- app/version.sh@20 -- # get_header_version suffix 00:06:25.666 21:12:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:25.666 21:12:15 version -- app/version.sh@14 -- # cut -f2 00:06:25.666 21:12:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.666 21:12:15 version -- app/version.sh@20 -- # suffix=-pre 00:06:25.666 21:12:15 version -- app/version.sh@22 -- # version=25.1 00:06:25.666 21:12:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:25.666 21:12:15 version -- app/version.sh@28 -- # version=25.1rc0 00:06:25.666 21:12:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:25.666 21:12:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:25.666 21:12:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:25.666 21:12:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:25.666 ************************************ 00:06:25.666 END TEST version 00:06:25.666 ************************************ 00:06:25.666 00:06:25.666 real 0m0.198s 00:06:25.666 user 0m0.122s 00:06:25.666 sys 0m0.099s 00:06:25.666 21:12:15 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.666 21:12:15 version -- common/autotest_common.sh@10 -- # set +x 00:06:25.666 21:12:15 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:25.666 21:12:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:25.666 21:12:15 -- spdk/autotest.sh@194 -- # uname -s 00:06:25.666 21:12:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:25.666 21:12:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:25.666 21:12:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:25.666 21:12:15 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:25.666 21:12:15 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:25.666 21:12:15 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:25.666 21:12:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.666 21:12:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.927 ************************************ 00:06:25.927 START TEST blockdev_nvme 00:06:25.927 ************************************ 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:25.927 * Looking for test storage... 00:06:25.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.927 21:12:15 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.927 --rc genhtml_branch_coverage=1 00:06:25.927 --rc genhtml_function_coverage=1 00:06:25.927 --rc genhtml_legend=1 00:06:25.927 --rc geninfo_all_blocks=1 00:06:25.927 --rc geninfo_unexecuted_blocks=1 00:06:25.927 00:06:25.927 ' 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.927 --rc genhtml_branch_coverage=1 00:06:25.927 --rc genhtml_function_coverage=1 00:06:25.927 --rc genhtml_legend=1 00:06:25.927 --rc geninfo_all_blocks=1 00:06:25.927 --rc geninfo_unexecuted_blocks=1 00:06:25.927 00:06:25.927 ' 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.927 --rc genhtml_branch_coverage=1 00:06:25.927 --rc genhtml_function_coverage=1 00:06:25.927 --rc genhtml_legend=1 00:06:25.927 --rc geninfo_all_blocks=1 00:06:25.927 --rc geninfo_unexecuted_blocks=1 00:06:25.927 00:06:25.927 ' 00:06:25.927 21:12:15 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.927 --rc genhtml_branch_coverage=1 00:06:25.927 --rc genhtml_function_coverage=1 00:06:25.927 --rc genhtml_legend=1 00:06:25.927 --rc geninfo_all_blocks=1 00:06:25.927 --rc geninfo_unexecuted_blocks=1 00:06:25.927 00:06:25.927 ' 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:25.927 21:12:15 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:25.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:25.927 21:12:15 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:25.928 21:12:15 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:25.928 21:12:15 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:25.928 21:12:15 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73302 00:06:25.928 21:12:15 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:25.928 21:12:15 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73302 00:06:25.928 21:12:15 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 73302 ']' 00:06:25.928 21:12:15 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.928 21:12:15 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.928 21:12:15 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.928 21:12:15 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.928 21:12:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.928 21:12:15 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:25.928 [2024-12-16 21:12:15.617342] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:25.928 [2024-12-16 21:12:15.617491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73302 ] 00:06:26.187 [2024-12-16 21:12:15.764887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.187 [2024-12-16 21:12:15.794018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:27.130 21:12:16 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:27.130 21:12:16 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:27.130 21:12:16 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:27.130 21:12:16 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:27.130 21:12:16 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:27.130 21:12:16 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.130 21:12:16 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.130 21:12:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:27.130 21:12:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.130 21:12:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.392 21:12:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.392 21:12:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.392 21:12:16 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:27.392 21:12:16 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:27.392 21:12:16 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.392 21:12:16 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.392 21:12:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:27.392 21:12:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:27.393 21:12:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fc5324da-dada-4546-b164-a7581b4abacf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fc5324da-dada-4546-b164-a7581b4abacf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "33fe509a-2120-4806-b17e-e2194a205940"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "33fe509a-2120-4806-b17e-e2194a205940",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "84a5a7ab-5835-4b16-a9ea-89ab7980124e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "84a5a7ab-5835-4b16-a9ea-89ab7980124e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b7432cdf-5fe9-49c6-a60b-656477cce97d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b7432cdf-5fe9-49c6-a60b-656477cce97d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "41b142e2-e80e-4ef0-ae35-7fad5cbad197"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "41b142e2-e80e-4ef0-ae35-7fad5cbad197",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "29bbe8c4-5c64-42ba-bed4-7b6e38b4b66c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "29bbe8c4-5c64-42ba-bed4-7b6e38b4b66c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:27.393 21:12:16 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:27.393 21:12:16 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:27.393 21:12:16 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:27.393 21:12:16 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 73302 00:06:27.393 21:12:16 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 73302 ']' 00:06:27.393 21:12:16 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 73302 00:06:27.393 21:12:16 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:27.393 21:12:16 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.393 21:12:16 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73302 00:06:27.393 21:12:16 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.393 21:12:16 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.393 21:12:16 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73302' 00:06:27.393 killing process with pid 73302 00:06:27.393 21:12:16 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 73302 00:06:27.393 21:12:16 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 73302 00:06:27.655 21:12:17 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:27.655 21:12:17 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:27.655 21:12:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:27.655 21:12:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.655 21:12:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.655 ************************************ 00:06:27.655 START TEST bdev_hello_world 00:06:27.655 ************************************ 00:06:27.655 21:12:17 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:27.917 [2024-12-16 21:12:17.369888] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:27.917 [2024-12-16 21:12:17.370016] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73374 ] 00:06:27.917 [2024-12-16 21:12:17.520818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.917 [2024-12-16 21:12:17.549650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.489 [2024-12-16 21:12:17.949808] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:28.489 [2024-12-16 21:12:17.949878] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:28.489 [2024-12-16 21:12:17.949903] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:28.490 [2024-12-16 21:12:17.952275] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:28.490 [2024-12-16 21:12:17.953401] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:28.490 [2024-12-16 21:12:17.953456] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:28.490 [2024-12-16 21:12:17.954110] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
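The NVMe setup earlier in this test is driven entirely over RPC: gen_nvme.sh emits a JSON snippet with one bdev_nvme_attach_controller call per PCIe controller, load_subsystem_config applies it, and bdev_get_bdevs piped through jq produces the bdev name list. Done by hand against a running target, the equivalent looks roughly like the sketch below (PCI address taken from this run; rpc.py invoked from the SPDK tree):

    # Attach one of the QEMU NVMe controllers by PCI address, as the
    # generated config does for all four, then list the unclaimed bdevs.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
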
00:06:28.490 00:06:28.490 [2024-12-16 21:12:17.954148] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:28.490 00:06:28.490 real 0m0.831s 00:06:28.490 user 0m0.538s 00:06:28.490 sys 0m0.187s 00:06:28.490 21:12:18 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.490 ************************************ 00:06:28.490 END TEST bdev_hello_world 00:06:28.490 ************************************ 00:06:28.490 21:12:18 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:28.490 21:12:18 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:28.490 21:12:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.490 21:12:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.490 21:12:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:28.751 ************************************ 00:06:28.751 START TEST bdev_bounds 00:06:28.751 ************************************ 00:06:28.751 21:12:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:28.751 21:12:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73401 00:06:28.751 21:12:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.752 Process bdevio pid: 73401 00:06:28.752 21:12:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73401' 00:06:28.752 21:12:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73401 00:06:28.752 21:12:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73401 ']' 00:06:28.752 21:12:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.752 21:12:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.752 21:12:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.752 21:12:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.752 21:12:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:28.752 21:12:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:28.752 [2024-12-16 21:12:18.265243] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:28.752 [2024-12-16 21:12:18.265847] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73401 ] 00:06:28.752 [2024-12-16 21:12:18.410524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.752 [2024-12-16 21:12:18.445045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.752 [2024-12-16 21:12:18.445247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.752 [2024-12-16 21:12:18.445284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.696 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.696 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:29.696 21:12:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:29.696 I/O targets: 00:06:29.696 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:29.696 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:29.696 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:29.696 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:29.696 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:29.696 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:29.696 00:06:29.696 00:06:29.696 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.696 http://cunit.sourceforge.net/ 00:06:29.696 00:06:29.696 00:06:29.696 Suite: bdevio tests on: Nvme3n1 00:06:29.696 Test: blockdev write read block ...passed 00:06:29.696 Test: blockdev write zeroes read block ...passed 00:06:29.696 Test: blockdev write zeroes read no split ...passed 00:06:29.696 Test: blockdev write zeroes read split ...passed 00:06:29.696 Test: blockdev write zeroes read split partial ...passed 00:06:29.696 Test: blockdev reset ...[2024-12-16 21:12:19.258912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:29.696 passed 00:06:29.696 Test: blockdev write read 8 blocks ...[2024-12-16 21:12:19.261527] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:29.696 passed 00:06:29.696 Test: blockdev write read size > 128k ...passed 00:06:29.696 Test: blockdev write read invalid size ...passed 00:06:29.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.696 Test: blockdev write read max offset ...passed 00:06:29.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.696 Test: blockdev writev readv 8 blocks ...passed 00:06:29.696 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.696 Test: blockdev writev readv block ...passed 00:06:29.696 Test: blockdev writev readv size > 128k ...passed 00:06:29.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.696 Test: blockdev comparev and writev ...[2024-12-16 21:12:19.279709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bde06000 len:0x1000 00:06:29.696 [2024-12-16 21:12:19.279788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.696 passed 00:06:29.696 Test: blockdev nvme passthru rw ...passed 00:06:29.696 Test: blockdev nvme passthru vendor specific ...[2024-12-16 21:12:19.281718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.696 passed 00:06:29.696 Test: blockdev nvme admin passthru ...[2024-12-16 21:12:19.281766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.696 passed 00:06:29.696 Test: blockdev copy ...passed 00:06:29.696 Suite: bdevio tests on: Nvme2n3 00:06:29.696 Test: blockdev write read block ...passed 00:06:29.696 Test: blockdev write zeroes read block ...passed 00:06:29.696 Test: blockdev write zeroes read no split ...passed 00:06:29.696 Test: blockdev write zeroes read split ...passed 00:06:29.696 Test: blockdev write zeroes read split partial ...passed 00:06:29.696 Test: blockdev reset ...[2024-12-16 21:12:19.313166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:29.696 passed 00:06:29.696 Test: blockdev write read 8 blocks ...[2024-12-16 21:12:19.316035] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:29.696 passed 00:06:29.696 Test: blockdev write read size > 128k ...passed 00:06:29.696 Test: blockdev write read invalid size ...passed 00:06:29.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.696 Test: blockdev write read max offset ...passed 00:06:29.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.696 Test: blockdev writev readv 8 blocks ...passed 00:06:29.696 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.696 Test: blockdev writev readv block ...passed 00:06:29.696 Test: blockdev writev readv size > 128k ...passed 00:06:29.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.696 Test: blockdev comparev and writev ...[2024-12-16 21:12:19.332566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bba02000 len:0x1000 00:06:29.696 [2024-12-16 21:12:19.332646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.696 passed 00:06:29.696 Test: blockdev nvme passthru rw ...passed 00:06:29.696 Test: blockdev nvme passthru vendor specific ...[2024-12-16 21:12:19.334957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.696 [2024-12-16 21:12:19.334996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.696 passed 00:06:29.696 Test: blockdev nvme admin passthru ...passed 00:06:29.696 Test: blockdev copy ...passed 00:06:29.696 Suite: bdevio tests on: Nvme2n2 00:06:29.696 Test: blockdev write read block ...passed 00:06:29.696 Test: blockdev write zeroes read block ...passed 00:06:29.696 Test: blockdev write zeroes read no split ...passed 00:06:29.696 Test: blockdev write zeroes read split ...passed 00:06:29.696 Test: blockdev write zeroes read split partial ...passed 00:06:29.696 Test: blockdev reset ...[2024-12-16 21:12:19.366858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:29.696 [2024-12-16 21:12:19.369265] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:29.696 passed 00:06:29.696 Test: blockdev write read 8 blocks ...passed 00:06:29.696 Test: blockdev write read size > 128k ...passed 00:06:29.696 Test: blockdev write read invalid size ...passed 00:06:29.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.696 Test: blockdev write read max offset ...passed 00:06:29.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.696 Test: blockdev writev readv 8 blocks ...passed 00:06:29.696 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.696 Test: blockdev writev readv block ...passed 00:06:29.696 Test: blockdev writev readv size > 128k ...passed 00:06:29.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.696 Test: blockdev comparev and writev ...[2024-12-16 21:12:19.385107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2e3b000 len:0x1000 00:06:29.696 [2024-12-16 21:12:19.385158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.696 passed 00:06:29.696 Test: blockdev nvme passthru rw ...passed 00:06:29.696 Test: blockdev nvme passthru vendor specific ...[2024-12-16 21:12:19.387987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.696 [2024-12-16 21:12:19.388026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.696 passed 00:06:29.958 Test: blockdev nvme admin passthru ...passed 00:06:29.958 Test: blockdev copy ...passed 00:06:29.958 Suite: bdevio tests on: Nvme2n1 00:06:29.958 Test: blockdev write read block ...passed 00:06:29.958 Test: blockdev write zeroes read block ...passed 00:06:29.958 Test: blockdev write zeroes read no split ...passed 00:06:29.958 Test: blockdev write zeroes read split ...passed 00:06:29.958 Test: blockdev write zeroes read split partial ...passed 00:06:29.958 Test: blockdev reset ...[2024-12-16 21:12:19.421770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:29.958 [2024-12-16 21:12:19.424939] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:29.958 passed 00:06:29.958 Test: blockdev write read 8 blocks ...passed 00:06:29.958 Test: blockdev write read size > 128k ...passed 00:06:29.958 Test: blockdev write read invalid size ...passed 00:06:29.958 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.958 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.958 Test: blockdev write read max offset ...passed 00:06:29.958 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.958 Test: blockdev writev readv 8 blocks ...passed 00:06:29.958 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.958 Test: blockdev writev readv block ...passed 00:06:29.958 Test: blockdev writev readv size > 128k ...passed 00:06:29.958 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.958 Test: blockdev comparev and writev ...[2024-12-16 21:12:19.440861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2e37000 len:0x1000 00:06:29.958 [2024-12-16 21:12:19.440918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.958 passed 00:06:29.958 Test: blockdev nvme passthru rw ...passed 00:06:29.958 Test: blockdev nvme passthru vendor specific ...[2024-12-16 21:12:19.441928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.958 [2024-12-16 21:12:19.441965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.958 passed 00:06:29.958 Test: blockdev nvme admin passthru ...passed 00:06:29.958 Test: blockdev copy ...passed 00:06:29.958 Suite: bdevio tests on: Nvme1n1 00:06:29.958 Test: blockdev write read block ...passed 00:06:29.958 Test: blockdev write zeroes read block ...passed 00:06:29.959 Test: blockdev write zeroes read no split ...passed 00:06:29.959 Test: blockdev write zeroes read split ...passed 00:06:29.959 Test: blockdev write zeroes read split partial ...passed 00:06:29.959 Test: blockdev reset ...[2024-12-16 21:12:19.476298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:29.959 [2024-12-16 21:12:19.479513] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:29.959 passed 00:06:29.959 Test: blockdev write read 8 blocks ...passed 00:06:29.959 Test: blockdev write read size > 128k ...passed 00:06:29.959 Test: blockdev write read invalid size ...passed 00:06:29.959 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.959 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.959 Test: blockdev write read max offset ...passed 00:06:29.959 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.959 Test: blockdev writev readv 8 blocks ...passed 00:06:29.959 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.959 Test: blockdev writev readv block ...passed 00:06:29.959 Test: blockdev writev readv size > 128k ...passed 00:06:29.959 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.959 Test: blockdev comparev and writev ...[2024-12-16 21:12:19.496777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2e33000 len:0x1000 00:06:29.959 [2024-12-16 21:12:19.496824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.959 passed 00:06:29.959 Test: blockdev nvme passthru rw ...passed 00:06:29.959 Test: blockdev nvme passthru vendor specific ...[2024-12-16 21:12:19.499541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.959 [2024-12-16 21:12:19.499581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.959 passed 00:06:29.959 Test: blockdev nvme admin passthru ...passed 00:06:29.959 Test: blockdev copy ...passed 00:06:29.959 Suite: bdevio tests on: Nvme0n1 00:06:29.959 Test: blockdev write read block ...passed 00:06:29.959 Test: blockdev write zeroes read block ...passed 00:06:29.959 Test: blockdev write zeroes read no split ...passed 00:06:29.959 Test: blockdev write zeroes read split ...passed 00:06:29.959 Test: blockdev write zeroes read split partial ...passed 00:06:29.959 Test: blockdev reset ...[2024-12-16 21:12:19.529151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:29.959 [2024-12-16 21:12:19.532028] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:29.959 passed 00:06:29.959 Test: blockdev write read 8 blocks ...passed 00:06:29.959 Test: blockdev write read size > 128k ...passed 00:06:29.959 Test: blockdev write read invalid size ...passed 00:06:29.959 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.959 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.959 Test: blockdev write read max offset ...passed 00:06:29.959 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.959 Test: blockdev writev readv 8 blocks ...passed 00:06:29.959 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.959 Test: blockdev writev readv block ...passed 00:06:29.959 Test: blockdev writev readv size > 128k ...passed 00:06:29.959 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.959 Test: blockdev comparev and writev ...passed 00:06:29.959 Test: blockdev nvme passthru rw ...[2024-12-16 21:12:19.546807] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:29.959 separate metadata which is not supported yet. 00:06:29.959 passed 00:06:29.959 Test: blockdev nvme passthru vendor specific ...[2024-12-16 21:12:19.548453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:29.959 [2024-12-16 21:12:19.548503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:29.959 passed 00:06:29.959 Test: blockdev nvme admin passthru ...passed 00:06:29.959 Test: blockdev copy ...passed 00:06:29.959 00:06:29.959 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.959 suites 6 6 n/a 0 0 00:06:29.959 tests 138 138 138 0 0 00:06:29.959 asserts 893 893 893 0 n/a 00:06:29.959 00:06:29.959 Elapsed time = 0.698 seconds 00:06:29.959 0 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73401 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73401 ']' 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73401 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73401 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73401' 00:06:29.959 killing process with pid 73401 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73401 00:06:29.959 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73401 00:06:30.221 21:12:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:30.221 00:06:30.221 real 0m1.560s 00:06:30.221 user 0m3.922s 00:06:30.221 sys 0m0.304s 00:06:30.221 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.221 21:12:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:30.221 ************************************ 00:06:30.221 END 
TEST bdev_bounds 00:06:30.221 ************************************ 00:06:30.221 21:12:19 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:30.221 21:12:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:30.221 21:12:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.221 21:12:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.221 ************************************ 00:06:30.221 START TEST bdev_nbd 00:06:30.221 ************************************ 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73449 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73449 /var/tmp/spdk-nbd.sock 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73449 ']' 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
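The nbd test starting here runs a bdev_svc app on a dedicated socket (/var/tmp/spdk-nbd.sock) and exports each bdev as a kernel block device. The core of it is the nbd_start_disk / nbd_stop_disk RPC pair; a sketch, assuming the nbd kernel module is loaded and passing the device node explicitly, whereas the harness lets SPDK pick one:

    # Export Nvme0n1 as /dev/nbd0 over the dedicated RPC socket, sanity-check
    # the size from the kernel side, then tear the mapping down again.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    blockdev --getsize64 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
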
00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.221 21:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:30.221 [2024-12-16 21:12:19.901975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:30.221 [2024-12-16 21:12:19.902103] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.482 [2024-12-16 21:12:20.044859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.482 [2024-12-16 21:12:20.077184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:31.425 21:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # 
local i 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.425 1+0 records in 00:06:31.425 1+0 records out 00:06:31.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120185 s, 3.4 MB/s 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.425 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.426 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:31.426 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:31.426 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:31.426 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.687 1+0 records in 00:06:31.687 1+0 records out 00:06:31.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00150565 s, 2.7 MB/s 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:31.687 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.949 1+0 records in 00:06:31.949 1+0 records out 00:06:31.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000825153 s, 5.0 MB/s 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:31.949 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.211 1+0 records in 00:06:32.211 1+0 records out 00:06:32.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108411 s, 3.8 MB/s 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.211 21:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.473 1+0 records in 00:06:32.473 1+0 records out 00:06:32.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106488 s, 3.8 MB/s 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.473 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.734 1+0 records in 00:06:32.734 1+0 records out 00:06:32.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121931 s, 3.4 MB/s 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.734 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd0", 00:06:32.996 "bdev_name": "Nvme0n1" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd1", 00:06:32.996 "bdev_name": "Nvme1n1" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd2", 00:06:32.996 "bdev_name": "Nvme2n1" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd3", 00:06:32.996 "bdev_name": "Nvme2n2" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd4", 00:06:32.996 "bdev_name": "Nvme2n3" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd5", 00:06:32.996 "bdev_name": "Nvme3n1" 00:06:32.996 } 00:06:32.996 ]' 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | 
.nbd_device' 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd0", 00:06:32.996 "bdev_name": "Nvme0n1" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd1", 00:06:32.996 "bdev_name": "Nvme1n1" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd2", 00:06:32.996 "bdev_name": "Nvme2n1" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd3", 00:06:32.996 "bdev_name": "Nvme2n2" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd4", 00:06:32.996 "bdev_name": "Nvme2n3" 00:06:32.996 }, 00:06:32.996 { 00:06:32.996 "nbd_device": "/dev/nbd5", 00:06:32.996 "bdev_name": "Nvme3n1" 00:06:32.996 } 00:06:32.996 ]' 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.996 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.258 21:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.520 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.781 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.043 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.304 21:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:34.566 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:34.826 /dev/nbd0 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.826 1+0 records in 00:06:34.826 1+0 records out 00:06:34.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365455 s, 11.2 MB/s 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:34.826 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:35.085 /dev/nbd1 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.085 21:12:24 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.085 1+0 records in 00:06:35.085 1+0 records out 00:06:35.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436654 s, 9.4 MB/s 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:35.085 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:35.344 /dev/nbd10 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.344 1+0 records in 00:06:35.344 1+0 records out 00:06:35.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476002 s, 8.6 MB/s 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:35.344 21:12:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:35.601 /dev/nbd11 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.601 1+0 records in 00:06:35.601 1+0 records out 00:06:35.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043435 s, 9.4 MB/s 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:35.601 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:35.861 /dev/nbd12 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.861 1+0 records in 00:06:35.861 1+0 records out 00:06:35.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394047 s, 10.4 MB/s 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
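Each start cycle in these traces follows the same shape: nbd_start_disk is sent to the spdk-nbd RPC server, waitfornbd polls /proc/partitions until the kernel registers the device (up to 20 tries), and a single 4 KiB O_DIRECT read proves the device actually services I/O; the cycle for nbd12 continues just below. A minimal standalone sketch of that helper, reconstructed only from the commands visible in this log (the real versions live in test/bdev/nbd_common.sh and common/autotest_common.sh; the retry sleep is an assumption, since xtrace does not show it, and the scratch file path is shortened here):

  #!/usr/bin/env bash
  # Sketch, not the shipped helper. Assumes the nbd kernel module is loaded
  # and an spdk-nbd app is listening on $sock.
  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1  # interval is an assumption; the log only shows the loop bounds
      done
      # One 4 KiB direct read: fails unless the device node is really live.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }

  "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0 && waitfornbd nbd0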
00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:35.861 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:35.861 /dev/nbd13 00:06:36.120 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:36.120 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:36.120 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:36.120 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:36.120 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.120 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.120 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.121 1+0 records in 00:06:36.121 1+0 records out 00:06:36.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392975 s, 10.4 MB/s 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd0", 00:06:36.121 "bdev_name": "Nvme0n1" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd1", 00:06:36.121 "bdev_name": "Nvme1n1" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": 
"/dev/nbd10", 00:06:36.121 "bdev_name": "Nvme2n1" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd11", 00:06:36.121 "bdev_name": "Nvme2n2" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd12", 00:06:36.121 "bdev_name": "Nvme2n3" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd13", 00:06:36.121 "bdev_name": "Nvme3n1" 00:06:36.121 } 00:06:36.121 ]' 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.121 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd0", 00:06:36.121 "bdev_name": "Nvme0n1" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd1", 00:06:36.121 "bdev_name": "Nvme1n1" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd10", 00:06:36.121 "bdev_name": "Nvme2n1" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd11", 00:06:36.121 "bdev_name": "Nvme2n2" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd12", 00:06:36.121 "bdev_name": "Nvme2n3" 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "nbd_device": "/dev/nbd13", 00:06:36.121 "bdev_name": "Nvme3n1" 00:06:36.121 } 00:06:36.121 ]' 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.382 /dev/nbd1 00:06:36.382 /dev/nbd10 00:06:36.382 /dev/nbd11 00:06:36.382 /dev/nbd12 00:06:36.382 /dev/nbd13' 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.382 /dev/nbd1 00:06:36.382 /dev/nbd10 00:06:36.382 /dev/nbd11 00:06:36.382 /dev/nbd12 00:06:36.382 /dev/nbd13' 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:36.382 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.383 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.383 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:36.383 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.383 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:36.383 256+0 records in 00:06:36.383 256+0 records out 00:06:36.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00890411 s, 118 MB/s 00:06:36.383 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.383 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.383 256+0 records in 00:06:36.383 256+0 records out 00:06:36.383 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0545368 s, 19.2 MB/s 00:06:36.383 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.383 21:12:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.383 256+0 records in 00:06:36.383 256+0 records out 00:06:36.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126367 s, 8.3 MB/s 00:06:36.383 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.383 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:36.645 256+0 records in 00:06:36.645 256+0 records out 00:06:36.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.230446 s, 4.6 MB/s 00:06:36.645 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.645 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:36.906 256+0 records in 00:06:36.906 256+0 records out 00:06:36.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205042 s, 5.1 MB/s 00:06:36.906 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.906 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:37.168 256+0 records in 00:06:37.168 256+0 records out 00:06:37.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205668 s, 5.1 MB/s 00:06:37.168 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.168 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:37.431 256+0 records in 00:06:37.431 256+0 records out 00:06:37.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22596 s, 4.6 MB/s 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
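Two checks are interleaved through this stretch. First, the count check asks the RPC server for its device list and counts the /dev/nbd entries (the jq and grep -c traces above, expecting 6). Then nbd_dd_data_verify writes the same 1 MiB of random data through every device with O_DIRECT and, in the cmp loop that continues just below over nbd10 through nbd13, byte-compares the first 1 MiB of each device against the source file. A condensed sketch built only from commands shown in the log:

  #!/usr/bin/env bash
  # Sketch of the count check and write/verify round trip exercised above.
  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

  # Count check: every started device must show up in nbd_get_disks.
  count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  [ "$count" -eq "${#nbd_list[@]}" ] || exit 1

  # Write phase: identical random bytes through each device, direct I/O.
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
  done

  # Verify phase: cmp reads each device back and compares byte for byte.
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M /tmp/nbdrandtest "$dev" || exit 1
  done
  rm /tmp/nbdrandtest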
00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.431 21:12:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.693 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.955 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.217 21:12:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.476 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:38.734 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:38.734 21:12:28 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:38.734 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:38.735 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.735 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.735 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:38.735 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.735 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.735 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.735 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.735 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:38.993 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:39.251 malloc_lvol_verify 00:06:39.251 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:39.251 8432de81-d2fc-4751-8158-677200a94172 00:06:39.251 21:12:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:39.509 15dfddd5-bda9-458c-9419-ddd4f64bb33f 00:06:39.509 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:39.778 /dev/nbd0 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- 
# [[ -e /sys/block/nbd0/size ]] 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:39.778 mke2fs 1.47.0 (5-Feb-2023) 00:06:39.778 Discarding device blocks: 0/4096 done 00:06:39.778 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:39.778 00:06:39.778 Allocating group tables: 0/1 done 00:06:39.778 Writing inode tables: 0/1 done 00:06:39.778 Creating journal (1024 blocks): done 00:06:39.778 Writing superblocks and filesystem accounting information: 0/1 done 00:06:39.778 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.778 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73449 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73449 ']' 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73449 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73449 00:06:40.054 killing process with pid 73449 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.054 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73449' 00:06:40.055 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73449 00:06:40.055 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73449 00:06:40.055 21:12:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:40.055 00:06:40.055 real 0m9.924s 00:06:40.055 user 0m14.012s 00:06:40.055 sys 0m3.420s 00:06:40.055 21:12:29 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.055 21:12:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 ************************************ 00:06:40.055 END TEST bdev_nbd 00:06:40.055 ************************************ 00:06:40.313 21:12:29 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:40.313 21:12:29 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:40.313 skipping fio tests on NVMe due to multi-ns failures. 00:06:40.313 21:12:29 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:40.313 21:12:29 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:40.313 21:12:29 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:40.313 21:12:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:40.313 21:12:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.313 21:12:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.313 ************************************ 00:06:40.313 START TEST bdev_verify 00:06:40.313 ************************************ 00:06:40.313 21:12:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:40.313 [2024-12-16 21:12:29.862015] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:40.313 [2024-12-16 21:12:29.862108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73828 ] 00:06:40.313 [2024-12-16 21:12:29.999152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.571 [2024-12-16 21:12:30.018283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.571 [2024-12-16 21:12:30.018469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.831 Running I/O for 5 seconds... 
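From here the suite leaves the kernel nbd path: bdev_verify runs SPDK's bdevperf example directly against the same six NVMe bdevs, and its per-core results follow. The invocation as it appears in the trace, with the flags unpacked (readings taken from bdevperf's usage text, so treat them as best-effort rather than authoritative):

  # -q 128     queue depth per job
  # -o 4096    I/O size in bytes
  # -w verify  write each block, read it back, and compare
  # -t 5       run for 5 seconds
  # -C         let every core in the mask drive every bdev (hence one job
  #            per core per bdev in the table below)
  # -m 0x3     core mask: cores 0 and 1
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''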
00:06:43.159 18944.00 IOPS, 74.00 MiB/s [2024-12-16T21:12:33.800Z] 20352.00 IOPS, 79.50 MiB/s [2024-12-16T21:12:34.743Z] 20778.67 IOPS, 81.17 MiB/s [2024-12-16T21:12:35.688Z] 20624.00 IOPS, 80.56 MiB/s [2024-12-16T21:12:35.688Z] 20659.20 IOPS, 80.70 MiB/s 00:06:45.988 Latency(us) 00:06:45.988 [2024-12-16T21:12:35.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:45.988 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x0 length 0xbd0bd 00:06:45.988 Nvme0n1 : 5.06 1694.94 6.62 0.00 0.00 75266.52 12048.54 71383.83 00:06:45.988 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:45.988 Nvme0n1 : 5.04 1701.21 6.65 0.00 0.00 74985.37 10989.88 72997.02 00:06:45.988 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x0 length 0xa0000 00:06:45.988 Nvme1n1 : 5.06 1694.04 6.62 0.00 0.00 75078.46 15426.17 58881.58 00:06:45.988 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0xa0000 length 0xa0000 00:06:45.988 Nvme1n1 : 5.04 1700.68 6.64 0.00 0.00 74911.95 14115.45 70980.53 00:06:45.988 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x0 length 0x80000 00:06:45.988 Nvme2n1 : 5.06 1693.59 6.62 0.00 0.00 74927.70 16636.06 59284.87 00:06:45.988 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x80000 length 0x80000 00:06:45.988 Nvme2n1 : 5.05 1709.34 6.68 0.00 0.00 74468.86 4360.66 67754.14 00:06:45.988 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x0 length 0x80000 00:06:45.988 Nvme2n2 : 5.07 1693.11 6.61 0.00 0.00 74808.85 17039.36 61301.37 00:06:45.988 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x80000 length 0x80000 00:06:45.988 Nvme2n2 : 5.06 1708.64 6.67 0.00 0.00 74384.61 5520.15 62107.96 00:06:45.988 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x0 length 0x80000 00:06:45.988 Nvme2n3 : 5.08 1702.25 6.65 0.00 0.00 74357.56 4965.61 61704.66 00:06:45.988 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x80000 length 0x80000 00:06:45.988 Nvme2n3 : 5.06 1707.76 6.67 0.00 0.00 74284.01 7007.31 62107.96 00:06:45.988 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x0 length 0x20000 00:06:45.988 Nvme3n1 : 5.08 1701.80 6.65 0.00 0.00 74275.46 5268.09 61704.66 00:06:45.988 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:45.988 Verification LBA range: start 0x20000 length 0x20000 00:06:45.988 Nvme3n1 : 5.07 1717.64 6.71 0.00 0.00 73815.31 4234.63 63721.16 00:06:45.988 [2024-12-16T21:12:35.688Z] =================================================================================================================== 00:06:45.988 [2024-12-16T21:12:35.688Z] Total : 20425.01 79.79 0.00 0.00 74628.52 4234.63 72997.02 00:06:46.561 00:06:46.561 real 0m6.221s 00:06:46.561 user 0m11.810s 00:06:46.561 sys 0m0.164s 00:06:46.561 21:12:36 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.561 ************************************ 00:06:46.561 END TEST bdev_verify 00:06:46.561 ************************************ 00:06:46.561 21:12:36 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:46.561 21:12:36 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:46.561 21:12:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:46.561 21:12:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.561 21:12:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.561 ************************************ 00:06:46.561 START TEST bdev_verify_big_io 00:06:46.561 ************************************ 00:06:46.561 21:12:36 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:46.561 [2024-12-16 21:12:36.162723] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:46.561 [2024-12-16 21:12:36.163361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73915 ] 00:06:46.822 [2024-12-16 21:12:36.311035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.822 [2024-12-16 21:12:36.334091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.822 [2024-12-16 21:12:36.334127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.082 Running I/O for 5 seconds... 
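In these result tables, MiB/s is simply IOPS scaled by the I/O size: MiB/s = IOPS * io_size / 2^20. The verify run above used -o 4096, so its final sample works out to 20659.20 * 4096 / 1048576, about 80.70 MiB/s; this big-I/O run uses -o 65536, so 1659.50 IOPS gives about 103.72 MiB/s, matching the second sample below. A one-liner to check any row:

  awk -v iops=1659.50 -v bs=65536 'BEGIN { printf "%.2f MiB/s\n", iops * bs / 2^20 }'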
00:06:50.911 144.00 IOPS, 9.00 MiB/s [2024-12-16T21:12:42.573Z] 1659.50 IOPS, 103.72 MiB/s [2024-12-16T21:12:42.831Z] 2414.00 IOPS, 150.88 MiB/s 00:06:53.131 Latency(us) 00:06:53.131 [2024-12-16T21:12:42.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.131 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x0 length 0xbd0b 00:06:53.131 Nvme0n1 : 5.69 134.91 8.43 0.00 0.00 918612.48 24399.56 1064707.94 00:06:53.131 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:53.131 Nvme0n1 : 5.49 139.79 8.74 0.00 0.00 882956.80 18955.03 1077613.49 00:06:53.131 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x0 length 0xa000 00:06:53.131 Nvme1n1 : 5.69 134.87 8.43 0.00 0.00 888968.01 82676.18 890483.00 00:06:53.131 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0xa000 length 0xa000 00:06:53.131 Nvme1n1 : 5.61 140.56 8.78 0.00 0.00 843962.17 94775.14 884030.23 00:06:53.131 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x0 length 0x8000 00:06:53.131 Nvme2n1 : 5.80 137.14 8.57 0.00 0.00 849903.38 61704.66 1148594.02 00:06:53.131 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x8000 length 0x8000 00:06:53.131 Nvme2n1 : 5.81 150.54 9.41 0.00 0.00 769276.59 60898.07 706578.90 00:06:53.131 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x0 length 0x8000 00:06:53.131 Nvme2n2 : 5.81 143.30 8.96 0.00 0.00 790956.63 46177.67 809823.31 00:06:53.131 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x8000 length 0x8000 00:06:53.131 Nvme2n2 : 5.81 144.43 9.03 0.00 0.00 778057.43 55251.89 1509949.44 00:06:53.131 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x0 length 0x8000 00:06:53.131 Nvme2n3 : 5.87 152.73 9.55 0.00 0.00 720700.99 36901.81 832408.02 00:06:53.131 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x8000 length 0x8000 00:06:53.131 Nvme2n3 : 5.90 154.66 9.67 0.00 0.00 701485.55 34885.32 1548666.09 00:06:53.131 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x0 length 0x2000 00:06:53.131 Nvme3n1 : 5.93 172.67 10.79 0.00 0.00 619545.24 466.31 851766.35 00:06:53.131 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:53.131 Verification LBA range: start 0x2000 length 0x2000 00:06:53.131 Nvme3n1 : 5.94 179.99 11.25 0.00 0.00 589882.28 453.71 1574477.19 00:06:53.131 [2024-12-16T21:12:42.831Z] =================================================================================================================== 00:06:53.131 [2024-12-16T21:12:42.831Z] Total : 1785.58 111.60 0.00 0.00 768487.14 453.71 1574477.19 00:06:54.065 00:06:54.065 real 0m7.338s 00:06:54.065 user 0m13.985s 00:06:54.065 sys 0m0.213s 00:06:54.065 21:12:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.065 ************************************ 
00:06:54.065 END TEST bdev_verify_big_io 00:06:54.065 ************************************ 00:06:54.065 21:12:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:54.065 21:12:43 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.065 21:12:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:54.065 21:12:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.065 21:12:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.065 ************************************ 00:06:54.065 START TEST bdev_write_zeroes 00:06:54.065 ************************************ 00:06:54.065 21:12:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.065 [2024-12-16 21:12:43.538317] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:54.065 [2024-12-16 21:12:43.538407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74016 ] 00:06:54.065 [2024-12-16 21:12:43.675343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.065 [2024-12-16 21:12:43.691792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.632 Running I/O for 1 seconds... 00:06:55.564 2213.00 IOPS, 8.64 MiB/s 00:06:55.564 Latency(us) 00:06:55.564 [2024-12-16T21:12:45.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.565 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:55.565 Nvme0n1 : 1.03 285.81 1.12 0.00 0.00 445842.32 7662.67 1013085.74 00:06:55.565 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:55.565 Nvme1n1 : 1.02 624.73 2.44 0.00 0.00 204509.81 7965.14 464599.83 00:06:55.565 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:55.565 Nvme2n1 : 1.02 501.67 1.96 0.00 0.00 253618.41 55655.19 464599.83 00:06:55.565 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:55.565 Nvme2n2 : 1.02 501.24 1.96 0.00 0.00 253251.74 56058.49 461373.44 00:06:55.565 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:55.565 Nvme2n3 : 1.02 500.78 1.96 0.00 0.00 253187.94 54848.59 464599.83 00:06:55.565 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:55.565 Nvme3n1 : 1.02 500.34 1.95 0.00 0.00 253317.12 53638.70 464599.83 00:06:55.565 [2024-12-16T21:12:45.265Z] =================================================================================================================== 00:06:55.565 [2024-12-16T21:12:45.265Z] Total : 2914.56 11.38 0.00 0.00 261780.00 7662.67 1013085.74 00:06:55.822 00:06:55.822 real 0m1.798s 00:06:55.822 user 0m1.547s 00:06:55.822 sys 0m0.144s 00:06:55.822 21:12:45 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.822 21:12:45 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:55.822 
************************************ 00:06:55.822 END TEST bdev_write_zeroes 00:06:55.822 ************************************ 00:06:55.822 21:12:45 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.822 21:12:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:55.822 21:12:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.822 21:12:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.822 ************************************ 00:06:55.822 START TEST bdev_json_nonenclosed 00:06:55.822 ************************************ 00:06:55.822 21:12:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.822 [2024-12-16 21:12:45.397303] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:55.822 [2024-12-16 21:12:45.397419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74053 ] 00:06:56.081 [2024-12-16 21:12:45.537593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.081 [2024-12-16 21:12:45.558469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.081 [2024-12-16 21:12:45.558547] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:56.081 [2024-12-16 21:12:45.558559] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:56.081 [2024-12-16 21:12:45.558570] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.081 00:06:56.081 real 0m0.275s 00:06:56.081 user 0m0.096s 00:06:56.081 sys 0m0.076s 00:06:56.081 21:12:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.081 ************************************ 00:06:56.081 END TEST bdev_json_nonenclosed 00:06:56.081 ************************************ 00:06:56.081 21:12:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:56.081 21:12:45 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:56.081 21:12:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:56.081 21:12:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.081 21:12:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.081 ************************************ 00:06:56.081 START TEST bdev_json_nonarray 00:06:56.081 ************************************ 00:06:56.081 21:12:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:56.081 [2024-12-16 21:12:45.726403] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:56.081 [2024-12-16 21:12:45.726528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74078 ] 00:06:56.342 [2024-12-16 21:12:45.868451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.342 [2024-12-16 21:12:45.886221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.342 [2024-12-16 21:12:45.886294] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:56.342 [2024-12-16 21:12:45.886307] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:56.342 [2024-12-16 21:12:45.886317] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.342 00:06:56.342 real 0m0.273s 00:06:56.342 user 0m0.107s 00:06:56.342 sys 0m0.063s 00:06:56.342 ************************************ 00:06:56.342 END TEST bdev_json_nonarray 00:06:56.342 ************************************ 00:06:56.342 21:12:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.342 21:12:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:56.342 21:12:45 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:56.342 00:06:56.342 real 0m30.615s 00:06:56.342 user 0m48.071s 00:06:56.342 sys 0m5.323s 00:06:56.342 21:12:45 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.342 ************************************ 00:06:56.342 END TEST blockdev_nvme 00:06:56.342 ************************************ 00:06:56.342 21:12:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.342 21:12:46 -- spdk/autotest.sh@209 -- # uname -s 00:06:56.342 21:12:46 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:56.342 21:12:46 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:56.342 21:12:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.342 21:12:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.342 21:12:46 -- common/autotest_common.sh@10 -- # set +x 00:06:56.342 ************************************ 00:06:56.342 START TEST blockdev_nvme_gpt 00:06:56.342 ************************************ 00:06:56.342 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:56.601 * Looking for test storage... 
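The two negative checks just completed (bdev_json_nonenclosed and bdev_json_nonarray) hand bdevperf a deliberately malformed --json config and pass only when json_config_prepare_ctx rejects it and the app stops with a non-zero rc. A minimal sketch of the three shapes involved; the /tmp file names are hypothetical stand-ins for the suite's real fixtures, test/bdev/nonenclosed.json and test/bdev/nonarray.json:

    # Accepted: a top-level object whose "subsystems" member is an array.
    cat > /tmp/valid.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # Rejected ("not enclosed in {}."): bare member, no top-level object.
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": [ { "subsystem": "bdev", "config": [] } ]
    EOF
    # Rejected ("'subsystems' should be an array."): object where an array is required.
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF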
00:06:56.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:56.601 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:56.601 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.601 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:56.601 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.601 21:12:46 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.602 --rc genhtml_branch_coverage=1 00:06:56.602 --rc genhtml_function_coverage=1 00:06:56.602 --rc genhtml_legend=1 00:06:56.602 --rc geninfo_all_blocks=1 00:06:56.602 --rc geninfo_unexecuted_blocks=1 00:06:56.602 00:06:56.602 ' 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.602 --rc 
genhtml_branch_coverage=1 00:06:56.602 --rc genhtml_function_coverage=1 00:06:56.602 --rc genhtml_legend=1 00:06:56.602 --rc geninfo_all_blocks=1 00:06:56.602 --rc geninfo_unexecuted_blocks=1 00:06:56.602 00:06:56.602 ' 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.602 --rc genhtml_branch_coverage=1 00:06:56.602 --rc genhtml_function_coverage=1 00:06:56.602 --rc genhtml_legend=1 00:06:56.602 --rc geninfo_all_blocks=1 00:06:56.602 --rc geninfo_unexecuted_blocks=1 00:06:56.602 00:06:56.602 ' 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.602 --rc genhtml_branch_coverage=1 00:06:56.602 --rc genhtml_function_coverage=1 00:06:56.602 --rc genhtml_legend=1 00:06:56.602 --rc geninfo_all_blocks=1 00:06:56.602 --rc geninfo_unexecuted_blocks=1 00:06:56.602 00:06:56.602 ' 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74157 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74157 
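The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates the 2.x option syntax: each version string is split on ".-:" into fields that are compared numerically, left to right, with missing fields treated as 0. A standalone sketch of the same technique; ver_lt is our name rather than the script's, and purely numeric dotted versions are assumed:

    #!/usr/bin/env bash
    # Return 0 when $1 is strictly older than $2, 1 otherwise.
    ver_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "1.15 predates 2"   # same verdict as the trace above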
00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 74157 ']' 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.602 21:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:56.602 21:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.602 [2024-12-16 21:12:46.269874] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:56.602 [2024-12-16 21:12:46.270318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74157 ] 00:06:56.862 [2024-12-16 21:12:46.415925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.862 [2024-12-16 21:12:46.436244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.434 21:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.434 21:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:57.435 21:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:57.435 21:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:57.435 21:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:57.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.957 Waiting for block devices as requested 00:06:57.957 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.957 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:58.218 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:58.218 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.495 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.495 21:12:52 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:03.495 BYT; 00:07:03.495 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:03.495 BYT; 00:07:03.495 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:03.495 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:03.496 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:03.496 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:03.496 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:03.496 21:12:52 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:03.496 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:03.496 21:12:52 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:04.430 The operation has completed successfully. 00:07:04.430 21:12:53 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:05.366 The operation has completed successfully. 00:07:05.366 21:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:05.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.194 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.194 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.194 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.454 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.454 21:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:06.454 21:12:55 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.454 21:12:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.454 [] 00:07:06.454 21:12:55 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.454 21:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:06.454 21:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:06.454 21:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:06.454 21:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:06.454 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:06.454 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.454 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.714 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:06.714 21:12:56 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.714 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:06.714 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.714 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.714 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.714 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:06.714 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:06.714 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.714 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.976 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.976 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:06.976 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:06.977 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a2d78ce0-5b0e-490d-8970-5b995d0d878a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a2d78ce0-5b0e-490d-8970-5b995d0d878a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7f92f1ec-0bce-4394-9211-c31ea3aa3d55"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7f92f1ec-0bce-4394-9211-c31ea3aa3d55",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fc255348-bbbc-407b-887c-c810ccacb3be"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fc255348-bbbc-407b-887c-c810ccacb3be",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "14143638-e5eb-4f34-a471-a1ba2f05db81"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "14143638-e5eb-4f34-a471-a1ba2f05db81",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5e9a31e5-5d14-4eec-b7cf-5a73348d460d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5e9a31e5-5d14-4eec-b7cf-5a73348d460d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:06.977 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:06.977 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:06.977 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:06.977 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 74157 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 74157 ']' 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 74157 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74157 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.977 killing process with pid 74157 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74157' 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 74157 00:07:06.977 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 74157 00:07:07.237 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:07.237 21:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:07.237 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:07.237 21:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.237 21:12:56 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.237 ************************************ 00:07:07.237 START TEST bdev_hello_world 00:07:07.237 ************************************ 00:07:07.237 21:12:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:07.237 [2024-12-16 21:12:56.837782] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:07.237 [2024-12-16 21:12:56.837901] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74763 ] 00:07:07.498 [2024-12-16 21:12:56.981230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.498 [2024-12-16 21:12:57.011739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.760 [2024-12-16 21:12:57.415848] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:07.760 [2024-12-16 21:12:57.415923] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:07.760 [2024-12-16 21:12:57.415950] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:07.760 [2024-12-16 21:12:57.418340] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:07.760 [2024-12-16 21:12:57.419446] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:07.760 [2024-12-16 21:12:57.419517] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:07.760 [2024-12-16 21:12:57.420189] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:07.760 00:07:07.760 [2024-12-16 21:12:57.420230] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:08.021 00:07:08.021 real 0m0.824s 00:07:08.021 user 0m0.520s 00:07:08.021 sys 0m0.199s 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.021 ************************************ 00:07:08.021 END TEST bdev_hello_world 00:07:08.021 ************************************ 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:08.021 21:12:57 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:08.021 21:12:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.021 21:12:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.021 21:12:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.021 ************************************ 00:07:08.021 START TEST bdev_bounds 00:07:08.021 ************************************ 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74794 00:07:08.021 Process bdevio pid: 74794 00:07:08.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
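The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from waitforlisten, which polls the freshly forked app's RPC socket (spdk_tgt earlier, bdevio here) until it answers. A sketch of that pattern under the defaults visible above (rpc_addr=/var/tmp/spdk.sock, max_retries=100); rpc_get_methods is used only as a cheap probe call:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_addr=/var/tmp/spdk.sock
    # Poll until the target answers an RPC or the retry budget runs out.
    for (( i = 0; i < 100; i++ )); do
      if "$rpc" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        echo "target is listening on $rpc_addr"
        break
      fi
      sleep 0.5
    done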
00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74794' 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74794 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74794 ']' 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.021 21:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:08.282 [2024-12-16 21:12:57.740014] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:08.282 [2024-12-16 21:12:57.740159] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74794 ] 00:07:08.282 [2024-12-16 21:12:57.893882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.282 [2024-12-16 21:12:57.927776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.282 [2024-12-16 21:12:57.928410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.282 [2024-12-16 21:12:57.928457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.223 21:12:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.223 21:12:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:09.223 21:12:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:09.223 I/O targets: 00:07:09.223 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:09.223 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:09.223 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:09.223 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.223 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.223 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.223 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:09.223 00:07:09.223 00:07:09.223 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.223 http://cunit.sourceforge.net/ 00:07:09.223 00:07:09.223 00:07:09.223 Suite: bdevio tests on: Nvme3n1 00:07:09.223 Test: blockdev write read block ...passed 00:07:09.223 Test: blockdev write zeroes read block ...passed 00:07:09.223 Test: blockdev write zeroes read no split ...passed 00:07:09.223 Test: blockdev write zeroes read split ...passed 00:07:09.223 Test: blockdev write zeroes read split partial ...passed 00:07:09.223 Test: blockdev reset ...[2024-12-16 21:12:58.735292] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:09.223 passed 00:07:09.223 Test: blockdev write read 8 blocks ...[2024-12-16 21:12:58.739381] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:07:09.223 passed 00:07:09.223 Test: blockdev write read size > 128k ...passed 00:07:09.223 Test: blockdev write read invalid size ...passed 00:07:09.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.223 Test: blockdev write read max offset ...passed 00:07:09.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.223 Test: blockdev writev readv 8 blocks ...passed 00:07:09.223 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.223 Test: blockdev writev readv block ...passed 00:07:09.223 Test: blockdev writev readv size > 128k ...passed 00:07:09.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.223 Test: blockdev comparev and writev ...[2024-12-16 21:12:58.758455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3e0e000 len:0x1000 00:07:09.223 [2024-12-16 21:12:58.758526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.223 passed 00:07:09.223 Test: blockdev nvme passthru rw ...passed 00:07:09.223 Test: blockdev nvme passthru vendor specific ...[2024-12-16 21:12:58.761118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:09.223 passed 00:07:09.224 Test: blockdev nvme admin passthru ...[2024-12-16 21:12:58.761184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.224 passed 00:07:09.224 Test: blockdev copy ...passed 00:07:09.224 Suite: bdevio tests on: Nvme2n3 00:07:09.224 Test: blockdev write read block ...passed 00:07:09.224 Test: blockdev write zeroes read block ...passed 00:07:09.224 Test: blockdev write zeroes read no split ...passed 00:07:09.224 Test: blockdev write zeroes read split ...passed 00:07:09.224 Test: blockdev write zeroes read split partial ...passed 00:07:09.224 Test: blockdev reset ...[2024-12-16 21:12:58.788718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:09.224 [2024-12-16 21:12:58.791486] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
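The bdevio run above is two processes: bdevio is launched with -w, so after loading bdev.json it registers one CUnit suite per bdev and then blocks, and the suites only start once tests.py issues the perform_tests RPC. A sketch of driving the same pair by hand, using only the commands visible in this log (the sleep is a crude stand-in for the waitforlisten poll sketched earlier):

    #!/usr/bin/env bash
    spdk=/home/vagrant/spdk_repo/spdk
    # -w: wait for the perform_tests RPC; -s 0 matches PRE_RESERVED_MEM=0 above.
    "$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" '' &
    bdevio_pid=$!
    sleep 2   # crude stand-in for waitforlisten
    "$spdk/test/bdev/bdevio/tests.py" perform_tests
    wait "$bdevio_pid"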
00:07:09.224 passed 00:07:09.224 Test: blockdev write read 8 blocks ...passed 00:07:09.224 Test: blockdev write read size > 128k ...passed 00:07:09.224 Test: blockdev write read invalid size ...passed 00:07:09.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.224 Test: blockdev write read max offset ...passed 00:07:09.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.224 Test: blockdev writev readv 8 blocks ...passed 00:07:09.224 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.224 Test: blockdev writev readv block ...passed 00:07:09.224 Test: blockdev writev readv size > 128k ...passed 00:07:09.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.224 Test: blockdev comparev and writev ...[2024-12-16 21:12:58.799248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3e08000 len:0x1000 00:07:09.224 [2024-12-16 21:12:58.799311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.224 passed 00:07:09.224 Test: blockdev nvme passthru rw ...passed 00:07:09.224 Test: blockdev nvme passthru vendor specific ...[2024-12-16 21:12:58.801069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:09.224 passed 00:07:09.224 Test: blockdev nvme admin passthru ...[2024-12-16 21:12:58.801134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.224 passed 00:07:09.224 Test: blockdev copy ...passed 00:07:09.224 Suite: bdevio tests on: Nvme2n2 00:07:09.224 Test: blockdev write read block ...passed 00:07:09.224 Test: blockdev write zeroes read block ...passed 00:07:09.224 Test: blockdev write zeroes read no split ...passed 00:07:09.224 Test: blockdev write zeroes read split ...passed 00:07:09.224 Test: blockdev write zeroes read split partial ...passed 00:07:09.224 Test: blockdev reset ...[2024-12-16 21:12:58.829680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:09.224 [2024-12-16 21:12:58.834345] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:09.224 Test: blockdev write read 8 blocks ...
00:07:09.224 passed 00:07:09.224 Test: blockdev write read size > 128k ...passed 00:07:09.224 Test: blockdev write read invalid size ...passed 00:07:09.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.224 Test: blockdev write read max offset ...passed 00:07:09.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.224 Test: blockdev writev readv 8 blocks ...passed 00:07:09.224 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.224 Test: blockdev writev readv block ...passed 00:07:09.224 Test: blockdev writev readv size > 128k ...passed 00:07:09.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.224 Test: blockdev comparev and writev ...[2024-12-16 21:12:58.853354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3e02000 len:0x1000 00:07:09.224 [2024-12-16 21:12:58.853572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.224 passed 00:07:09.224 Test: blockdev nvme passthru rw ...passed 00:07:09.224 Test: blockdev nvme passthru vendor specific ...passed 00:07:09.224 Test: blockdev nvme admin passthru ...[2024-12-16 21:12:58.856288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-12-16 21:12:58.856339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.224 passed 00:07:09.224 Test: blockdev copy ...passed 00:07:09.224 Suite: bdevio tests on: Nvme2n1 00:07:09.224 Test: blockdev write read block ...passed 00:07:09.224 Test: blockdev write zeroes read block ...passed 00:07:09.224 Test: blockdev write zeroes read no split ...passed 00:07:09.224 Test: blockdev write zeroes read split ...passed 00:07:09.224 Test: blockdev write zeroes read split partial ...passed 00:07:09.224 Test: blockdev reset ...[2024-12-16 21:12:58.884110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:09.224 [2024-12-16 21:12:58.888329] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:09.224 Test: blockdev write read 8 blocks ...
00:07:09.224 passed 00:07:09.224 Test: blockdev write read size > 128k ...passed 00:07:09.224 Test: blockdev write read invalid size ...passed 00:07:09.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.224 Test: blockdev write read max offset ...passed 00:07:09.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.224 Test: blockdev writev readv 8 blocks ...passed 00:07:09.224 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.224 Test: blockdev writev readv block ...passed 00:07:09.224 Test: blockdev writev readv size > 128k ...passed 00:07:09.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.224 Test: blockdev comparev and writev ...[2024-12-16 21:12:58.897686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4204000 len:0x1000 00:07:09.224 [2024-12-16 21:12:58.897731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.224 passed 00:07:09.224 Test: blockdev nvme passthru rw ...passed 00:07:09.224 Test: blockdev nvme passthru vendor specific ...passed 00:07:09.224 Test: blockdev nvme admin passthru ...[2024-12-16 21:12:58.898653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:09.224 [2024-12-16 21:12:58.898684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.224 passed 00:07:09.224 Test: blockdev copy ...passed 00:07:09.224 Suite: bdevio tests on: Nvme1n1p2 00:07:09.224 Test: blockdev write read block ...passed 00:07:09.224 Test: blockdev write zeroes read block ...passed 00:07:09.224 Test: blockdev write zeroes read no split ...passed 00:07:09.224 Test: blockdev write zeroes read split ...passed 00:07:09.224 Test: blockdev write zeroes read split partial ...passed 00:07:09.487 Test: blockdev reset ...[2024-12-16 21:12:58.924370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:09.487 [2024-12-16 21:12:58.927726] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:09.487 passed 00:07:09.487 Test: blockdev write read 8 blocks ...passed 00:07:09.487 Test: blockdev write read size > 128k ...passed 00:07:09.487 Test: blockdev write read invalid size ...passed 00:07:09.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.487 Test: blockdev write read max offset ...passed 00:07:09.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.487 Test: blockdev writev readv 8 blocks ...passed 00:07:09.487 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.487 Test: blockdev writev readv block ...passed 00:07:09.487 Test: blockdev writev readv size > 128k ...passed 00:07:09.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.487 Test: blockdev comparev and writev ...[2024-12-16 21:12:58.938043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d123d000 len:0x1000 00:07:09.487 [2024-12-16 21:12:58.938085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.487 passed 00:07:09.487 Test: blockdev nvme passthru rw ...passed 00:07:09.487 Test: blockdev nvme passthru vendor specific ...passed 00:07:09.487 Test: blockdev nvme admin passthru ...passed 00:07:09.487 Test: blockdev copy ...passed 00:07:09.487 Suite: bdevio tests on: Nvme1n1p1 00:07:09.487 Test: blockdev write read block ...passed 00:07:09.487 Test: blockdev write zeroes read block ...passed 00:07:09.487 Test: blockdev write zeroes read no split ...passed 00:07:09.487 Test: blockdev write zeroes read split ...passed 00:07:09.487 Test: blockdev write zeroes read split partial ...passed 00:07:09.487 Test: blockdev reset ...[2024-12-16 21:12:58.951071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:09.487 passed 00:07:09.487 Test: blockdev write read 8 blocks ...[2024-12-16 21:12:58.952611] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:09.487 passed 00:07:09.487 Test: blockdev write read size > 128k ...passed 00:07:09.487 Test: blockdev write read invalid size ...passed 00:07:09.488 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.488 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.488 Test: blockdev write read max offset ...passed 00:07:09.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.488 Test: blockdev writev readv 8 blocks ...passed 00:07:09.488 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.488 Test: blockdev writev readv block ...passed 00:07:09.488 Test: blockdev writev readv size > 128k ...passed 00:07:09.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.488 Test: blockdev comparev and writev ...[2024-12-16 21:12:58.958493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d1239000 len:0x1000 00:07:09.488 [2024-12-16 21:12:58.958533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.488 passed 00:07:09.488 Test: blockdev nvme passthru rw ...passed 00:07:09.488 Test: blockdev nvme passthru vendor specific ...passed 00:07:09.488 Test: blockdev nvme admin passthru ...passed 00:07:09.488 Test: blockdev copy ...passed 00:07:09.488 Suite: bdevio tests on: Nvme0n1 00:07:09.488 Test: blockdev write read block ...passed 00:07:09.488 Test: blockdev write zeroes read block ...passed 00:07:09.488 Test: blockdev write zeroes read no split ...passed 00:07:09.488 Test: blockdev write zeroes read split ...passed 00:07:09.488 Test: blockdev write zeroes read split partial ...passed 00:07:09.488 Test: blockdev reset ...[2024-12-16 21:12:58.969557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:09.488 passed 00:07:09.488 Test: blockdev write read 8 blocks ...[2024-12-16 21:12:58.971127] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:09.488 passed 00:07:09.488 Test: blockdev write read size > 128k ...passed 00:07:09.488 Test: blockdev write read invalid size ...passed 00:07:09.488 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.488 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.488 Test: blockdev write read max offset ...passed 00:07:09.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.488 Test: blockdev writev readv 8 blocks ...passed 00:07:09.488 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.488 Test: blockdev writev readv block ...passed 00:07:09.488 Test: blockdev writev readv size > 128k ...passed 00:07:09.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.488 Test: blockdev comparev and writev ...passed 00:07:09.488 Test: blockdev nvme passthru rw ...[2024-12-16 21:12:58.978650] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:09.488 separate metadata which is not supported yet. 
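A note on the comparev output above: the "COMPARE FAILURE (02/85)" completions are the passing outcome here, since the test deliberately overwrites a region and then issues an NVMe COMPARE against the stale pattern, so a miscompare proves the write landed. Nvme0n1 is the exception: bdevio skips comparev_and_writev on it because that namespace carries separate metadata, which the test does not exercise yet. A minimal sketch for tallying these outcomes from a saved copy of this console output (the build.log path is an assumption, not a file produced by this job):

  #!/usr/bin/env bash
  log=build.log   # hypothetical saved copy of this console log
  # Count individual bdevio checks that reported "...passed".
  grep -o '\.\.\.passed' "$log" | wc -l
  # List bdevs whose comparev test was skipped.
  grep -o 'skipping comparev_and_writev on bdev [^ ]*' "$log"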
00:07:09.488 passed 00:07:09.488 Test: blockdev nvme passthru vendor specific ...passed 00:07:09.488 Test: blockdev nvme admin passthru ...[2024-12-16 21:12:58.980404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:09.488 [2024-12-16 21:12:58.980460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:09.488 passed 00:07:09.488 Test: blockdev copy ...passed 00:07:09.488 00:07:09.488 Run Summary: Type Total Ran Passed Failed Inactive 00:07:09.488 suites 7 7 n/a 0 0 00:07:09.488 tests 161 161 161 0 0 00:07:09.488 asserts 1025 1025 1025 0 n/a 00:07:09.488 00:07:09.488 Elapsed time = 0.631 seconds 00:07:09.488 0 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74794 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74794 ']' 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74794 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74794 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74794' 00:07:09.488 killing process with pid 74794 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74794 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74794 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:09.488 00:07:09.488 real 0m1.501s 00:07:09.488 user 0m3.778s 00:07:09.488 sys 0m0.307s 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.488 21:12:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:09.488 ************************************ 00:07:09.488 END TEST bdev_bounds 00:07:09.488 ************************************ 00:07:09.750 21:12:59 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.750 21:12:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:09.750 21:12:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.750 21:12:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.750 ************************************ 00:07:09.750 START TEST bdev_nbd 00:07:09.750 ************************************ 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:09.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74848 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74848 /var/tmp/spdk-nbd.sock 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74848 ']' 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.750 21:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:09.750 [2024-12-16 21:12:59.337395] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
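The trace that follows is nbd_rpc_start_stop_verify: bdev_svc now owns the seven bdevs, and each one is exported as a kernel /dev/nbdX node through the /var/tmp/spdk-nbd.sock RPC socket, with waitfornbd polling /proc/partitions until the node is usable. Condensed into a standalone sketch (the 0.1 s sleep is an added simplification for readability; the helper itself just retries up to 20 times):

  #!/usr/bin/env bash
  # Export an SPDK bdev over NBD and wait for the kernel device to appear.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  dev=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)  # prints e.g. /dev/nbd0
  name=$(basename "$dev")
  for i in $(seq 1 20); do
      grep -q -w "$name" /proc/partitions && break   # device is registered
      sleep 0.1
  done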
00:07:09.750 [2024-12-16 21:12:59.337857] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.013 [2024-12-16 21:12:59.495855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.013 [2024-12-16 21:12:59.525234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:10.585 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.847 1+0 records in 00:07:10.847 1+0 records out 00:07:10.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000949488 s, 4.3 MB/s 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:10.847 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.108 1+0 records in 00:07:11.108 1+0 records out 00:07:11.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766993 s, 5.3 MB/s 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:11.108 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.368 1+0 records in 00:07:11.368 1+0 records out 00:07:11.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119647 s, 3.4 MB/s 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.368 21:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.368 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.368 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:11.368 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.628 1+0 records in 00:07:11.628 1+0 records out 00:07:11.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110736 s, 3.7 MB/s 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:11.628 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.890 1+0 records in 00:07:11.890 1+0 records out 00:07:11.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122324 s, 3.3 MB/s 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:11.890 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
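Each waitfornbd call above finishes with the same liveness probe: read a single 4 KiB block through O_DIRECT and confirm that a non-empty file came back. The dd throughput figures (3-6 MB/s in these runs) are meaningless for a one-block transfer; only the "1+0 records out" matters. Reduced to its core, the probe is roughly:

  # Probe an NBD device with one direct-I/O read, as waitfornbd does.
  probe_nbd() {
      local dev=$1
      local tmp=/tmp/nbdtest          # assumption: the harness writes into its repo tree instead
      dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]                # zero bytes read means the device is not serving I/O
  }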
00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.152 1+0 records in 00:07:12.152 1+0 records out 00:07:12.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838771 s, 4.9 MB/s 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.152 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.411 1+0 records in 00:07:12.411 1+0 records out 00:07:12.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633733 s, 6.5 MB/s 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.411 21:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.411 21:13:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.411 21:13:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.411 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.411 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.411 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.672 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:12.672 { 00:07:12.672 "nbd_device": "/dev/nbd0", 00:07:12.672 "bdev_name": "Nvme0n1" 00:07:12.672 }, 00:07:12.672 { 00:07:12.672 "nbd_device": "/dev/nbd1", 00:07:12.672 "bdev_name": "Nvme1n1p1" 00:07:12.672 }, 00:07:12.672 { 00:07:12.672 "nbd_device": "/dev/nbd2", 00:07:12.672 "bdev_name": "Nvme1n1p2" 00:07:12.672 }, 00:07:12.672 { 00:07:12.672 "nbd_device": "/dev/nbd3", 00:07:12.672 "bdev_name": "Nvme2n1" 00:07:12.672 }, 00:07:12.672 { 00:07:12.672 "nbd_device": "/dev/nbd4", 00:07:12.672 "bdev_name": "Nvme2n2" 00:07:12.672 }, 00:07:12.672 { 00:07:12.672 "nbd_device": "/dev/nbd5", 00:07:12.672 "bdev_name": "Nvme2n3" 00:07:12.672 }, 00:07:12.672 { 00:07:12.672 "nbd_device": "/dev/nbd6", 00:07:12.672 "bdev_name": "Nvme3n1" 00:07:12.672 } 00:07:12.672 ]' 00:07:12.672 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:12.672 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:12.672 { 00:07:12.673 "nbd_device": "/dev/nbd0", 00:07:12.673 "bdev_name": "Nvme0n1" 00:07:12.673 }, 00:07:12.673 { 00:07:12.673 "nbd_device": "/dev/nbd1", 00:07:12.673 "bdev_name": "Nvme1n1p1" 00:07:12.673 }, 00:07:12.673 { 00:07:12.673 "nbd_device": "/dev/nbd2", 00:07:12.673 "bdev_name": "Nvme1n1p2" 00:07:12.673 }, 00:07:12.673 { 00:07:12.673 "nbd_device": "/dev/nbd3", 00:07:12.673 "bdev_name": "Nvme2n1" 00:07:12.673 }, 00:07:12.673 { 00:07:12.673 "nbd_device": "/dev/nbd4", 00:07:12.673 "bdev_name": "Nvme2n2" 00:07:12.673 }, 00:07:12.673 { 00:07:12.673 "nbd_device": "/dev/nbd5", 00:07:12.673 "bdev_name": "Nvme2n3" 00:07:12.673 }, 00:07:12.673 { 00:07:12.673 "nbd_device": "/dev/nbd6", 00:07:12.673 "bdev_name": "Nvme3n1" 00:07:12.673 } 00:07:12.673 ]' 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.673 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.935 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.197 21:13:02 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:13.458 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:13.458 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:13.458 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:13.458 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.458 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.458 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:13.458 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.458 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.459 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.459 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.720 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:13.979 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:13.979 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:13.979 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:13.979 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.979 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.979 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:13.979 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.979 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
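Tear-down mirrors start-up: nbd_stop_disk is issued for each exported device, and waitfornbd_exit polls /proc/partitions until the name disappears, so a device node is never reused before the kernel has released it. A condensed version of that loop against the same socket (the sleep is again an added simplification):

  # Stop the exported NBD devices and wait for the kernel to drop each one.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
      "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
      name=$(basename "$dev")
      for i in $(seq 1 20); do
          grep -q -w "$name" /proc/partitions || break   # gone from the partition table
          sleep 0.1
      done
  done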
00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.980 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.238 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:14.239 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:14.239 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.239 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.239 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:14.239 21:13:03 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.239 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:14.239 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:14.239 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:14.239 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:14.239 21:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:14.497 /dev/nbd0 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.497 1+0 records in 00:07:14.497 1+0 records out 00:07:14.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483503 s, 8.5 MB/s 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:14.497 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:14.757 /dev/nbd1 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.757 21:13:04 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.757 1+0 records in 00:07:14.757 1+0 records out 00:07:14.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793447 s, 5.2 MB/s 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:14.757 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:15.019 /dev/nbd10 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.019 1+0 records in 00:07:15.019 1+0 records out 00:07:15.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00099379 s, 4.1 MB/s 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.019 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:15.278 /dev/nbd11 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.278 1+0 records in 00:07:15.278 1+0 records out 00:07:15.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101943 s, 4.0 MB/s 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.278 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:15.278 /dev/nbd12 00:07:15.537 21:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
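This second pass is nbd_rpc_data_verify: the same seven bdevs are re-exported, now to fixed device names (/dev/nbd0, /dev/nbd1, then /dev/nbd10 through /dev/nbd14), so that data written through a node can later be verified against the owning bdev. The nbd_get_disks RPC reports the live mapping as JSON; the harness reduces it with jq -r '.[] | .nbd_device', and a slight variant of that query recovers both sides of the mapping:

  # Print the current bdev-to-device mapping from the NBD RPC server.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | "\(.bdev_name) => \(.nbd_device)"'
  # e.g. Nvme0n1 => /dev/nbd0
  #      Nvme1n1p2 => /dev/nbd10

The harness itself only extracts .nbd_device; printing the paired bdev_name is a convenience added for this sketch.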
00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.537 1+0 records in 00:07:15.537 1+0 records out 00:07:15.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106153 s, 3.9 MB/s 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.537 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:15.537 /dev/nbd13 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.538 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.538 1+0 records in 00:07:15.538 1+0 records out 00:07:15.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114422 s, 3.6 MB/s 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:15.796 /dev/nbd14 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.796 1+0 records in 00:07:15.796 1+0 records out 00:07:15.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120474 s, 3.4 MB/s 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.796 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd0", 00:07:16.056 "bdev_name": "Nvme0n1" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd1", 00:07:16.056 "bdev_name": "Nvme1n1p1" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd10", 00:07:16.056 "bdev_name": "Nvme1n1p2" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd11", 00:07:16.056 "bdev_name": "Nvme2n1" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd12", 00:07:16.056 "bdev_name": "Nvme2n2" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd13", 00:07:16.056 "bdev_name": "Nvme2n3" 
00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd14", 00:07:16.056 "bdev_name": "Nvme3n1" 00:07:16.056 } 00:07:16.056 ]' 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd0", 00:07:16.056 "bdev_name": "Nvme0n1" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd1", 00:07:16.056 "bdev_name": "Nvme1n1p1" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd10", 00:07:16.056 "bdev_name": "Nvme1n1p2" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd11", 00:07:16.056 "bdev_name": "Nvme2n1" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd12", 00:07:16.056 "bdev_name": "Nvme2n2" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd13", 00:07:16.056 "bdev_name": "Nvme2n3" 00:07:16.056 }, 00:07:16.056 { 00:07:16.056 "nbd_device": "/dev/nbd14", 00:07:16.056 "bdev_name": "Nvme3n1" 00:07:16.056 } 00:07:16.056 ]' 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:16.056 /dev/nbd1 00:07:16.056 /dev/nbd10 00:07:16.056 /dev/nbd11 00:07:16.056 /dev/nbd12 00:07:16.056 /dev/nbd13 00:07:16.056 /dev/nbd14' 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:16.056 /dev/nbd1 00:07:16.056 /dev/nbd10 00:07:16.056 /dev/nbd11 00:07:16.056 /dev/nbd12 00:07:16.056 /dev/nbd13 00:07:16.056 /dev/nbd14' 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:16.056 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:16.057 256+0 records in 00:07:16.057 256+0 records out 00:07:16.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00678132 s, 155 MB/s 00:07:16.057 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.057 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:16.317 256+0 records in 00:07:16.317 256+0 records out 00:07:16.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.175599 s, 6.0 MB/s 00:07:16.317 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.317 21:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.579 256+0 records in 00:07:16.579 256+0 records out 00:07:16.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214855 s, 4.9 MB/s 00:07:16.579 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.579 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:16.841 256+0 records in 00:07:16.841 256+0 records out 00:07:16.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22536 s, 4.7 MB/s 00:07:16.841 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.841 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:16.841 256+0 records in 00:07:16.841 256+0 records out 00:07:16.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19645 s, 5.3 MB/s 00:07:16.841 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.841 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:17.102 256+0 records in 00:07:17.102 256+0 records out 00:07:17.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218224 s, 4.8 MB/s 00:07:17.102 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.102 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:17.363 256+0 records in 00:07:17.363 256+0 records out 00:07:17.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22913 s, 4.6 MB/s 00:07:17.363 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.363 21:13:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:17.623 256+0 records in 00:07:17.623 256+0 records out 00:07:17.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223507 s, 4.7 MB/s 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.623 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.624 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.885 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.146 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.408 21:13:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.668 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.926 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.185 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.443 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.443 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.443 21:13:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:19.443 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:19.701 malloc_lvol_verify 00:07:19.701 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:19.960 1e9fdd85-52f7-4f6c-933e-5bfc1213a8b5 00:07:19.960 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:19.960 4454f4ac-fca9-4213-af7c-11f4b81ed3d5 00:07:19.960 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:20.253 /dev/nbd0 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:20.253 mke2fs 1.47.0 (5-Feb-2023) 00:07:20.253 Discarding device blocks: 0/4096 done 00:07:20.253 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:20.253 00:07:20.253 Allocating group tables: 0/1 done 00:07:20.253 Writing inode tables: 0/1 done 00:07:20.253 Creating journal (1024 blocks): done 00:07:20.253 Writing superblocks and filesystem accounting information: 0/1 done 00:07:20.253 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:20.253 21:13:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74848 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74848 ']' 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74848 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74848 00:07:20.530 killing process with pid 74848 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74848' 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74848 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74848 00:07:20.530 ************************************ 00:07:20.530 END TEST bdev_nbd 00:07:20.530 ************************************ 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:20.530 00:07:20.530 real 0m10.977s 00:07:20.530 user 0m15.209s 00:07:20.530 sys 0m3.874s 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.530 21:13:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:20.789 21:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:20.789 skipping fio tests on NVMe due to multi-ns failures. 00:07:20.789 21:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:20.789 21:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:20.789 21:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:20.789 21:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:20.789 21:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:20.789 21:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:20.789 21:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.789 21:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.789 ************************************ 00:07:20.789 START TEST bdev_verify 00:07:20.789 ************************************ 00:07:20.789 21:13:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:20.789 [2024-12-16 21:13:10.343142] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:20.789 [2024-12-16 21:13:10.343246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75258 ] 00:07:20.789 [2024-12-16 21:13:10.483771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.047 [2024-12-16 21:13:10.509212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.047 [2024-12-16 21:13:10.509275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.305 Running I/O for 5 seconds... 
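The verify run that produces the samples below is the bdevperf invocation recorded at the top of this test, restated here with the flags broken out; the reading of -C is an inference from the paired per-core job rows in the table that follows, everything else is taken from the command line as logged.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    -q 128       # 128 outstanding I/Os per job
    -o 4096      # 4 KiB per I/O
    -w verify    # write a block, read it back, compare
    -t 5         # run for five seconds
    -C           # inferred: lets every core drive every bdev
    -m 0x3       # reactors on cores 0 and 1
)
"$bdevperf" "${args[@]}" ''   # trailing '' mirrors the recorded command line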
00:07:23.616 23552.00 IOPS, 92.00 MiB/s [2024-12-16T21:13:14.249Z] 24352.00 IOPS, 95.12 MiB/s [2024-12-16T21:13:15.182Z] 25002.67 IOPS, 97.67 MiB/s [2024-12-16T21:13:16.114Z] 26016.00 IOPS, 101.62 MiB/s [2024-12-16T21:13:16.114Z] 26444.80 IOPS, 103.30 MiB/s 00:07:26.414 Latency(us) 00:07:26.414 [2024-12-16T21:13:16.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.414 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x0 length 0xbd0bd 00:07:26.414 Nvme0n1 : 5.05 1824.77 7.13 0.00 0.00 69915.59 12502.25 68157.44 00:07:26.414 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:26.414 Nvme0n1 : 5.05 1900.47 7.42 0.00 0.00 67101.82 13006.38 70577.23 00:07:26.414 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x0 length 0x4ff80 00:07:26.414 Nvme1n1p1 : 5.05 1823.87 7.12 0.00 0.00 69833.75 13812.97 64124.46 00:07:26.414 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:26.414 Nvme1n1p1 : 5.05 1899.88 7.42 0.00 0.00 66993.80 14216.27 64124.46 00:07:26.414 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x0 length 0x4ff7f 00:07:26.414 Nvme1n1p2 : 5.06 1822.76 7.12 0.00 0.00 69751.12 14922.04 63317.86 00:07:26.414 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:26.414 Nvme1n1p2 : 5.06 1898.61 7.42 0.00 0.00 66893.35 15022.87 62511.26 00:07:26.414 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x0 length 0x80000 00:07:26.414 Nvme2n1 : 5.07 1829.76 7.15 0.00 0.00 69437.22 4663.14 61301.37 00:07:26.414 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x80000 length 0x80000 00:07:26.414 Nvme2n1 : 5.08 1903.57 7.44 0.00 0.00 66636.53 3654.89 59284.87 00:07:26.414 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x0 length 0x80000 00:07:26.414 Nvme2n2 : 5.07 1828.90 7.14 0.00 0.00 69346.46 5671.38 64527.75 00:07:26.414 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x80000 length 0x80000 00:07:26.414 Nvme2n2 : 5.09 1912.35 7.47 0.00 0.00 66311.84 7309.78 59688.17 00:07:26.414 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x0 length 0x80000 00:07:26.414 Nvme2n3 : 5.08 1828.23 7.14 0.00 0.00 69251.10 6024.27 66947.54 00:07:26.414 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x80000 length 0x80000 00:07:26.414 Nvme2n3 : 5.09 1911.86 7.47 0.00 0.00 66209.84 7662.67 62511.26 00:07:26.414 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x0 length 0x20000 00:07:26.414 Nvme3n1 : 5.08 1837.64 7.18 0.00 0.00 68870.13 5444.53 70173.93 00:07:26.414 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.414 Verification LBA range: start 0x20000 length 0x20000 00:07:26.414 Nvme3n1 : 
5.09 1911.35 7.47 0.00 0.00 66131.07 7763.50 62914.56 00:07:26.414 [2024-12-16T21:13:16.114Z] =================================================================================================================== 00:07:26.414 [2024-12-16T21:13:16.114Z] Total : 26134.03 102.09 0.00 0.00 68016.34 3654.89 70577.23 00:07:27.356 00:07:27.356 real 0m6.521s 00:07:27.356 user 0m12.412s 00:07:27.356 sys 0m0.175s 00:07:27.356 ************************************ 00:07:27.356 END TEST bdev_verify 00:07:27.356 ************************************ 00:07:27.356 21:13:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.356 21:13:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:27.356 21:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:27.356 21:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:27.356 21:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.356 21:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.356 ************************************ 00:07:27.356 START TEST bdev_verify_big_io 00:07:27.356 ************************************ 00:07:27.356 21:13:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:27.356 [2024-12-16 21:13:16.924588] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:27.356 [2024-12-16 21:13:16.924707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75351 ] 00:07:27.616 [2024-12-16 21:13:17.071581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.616 [2024-12-16 21:13:17.091440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.616 [2024-12-16 21:13:17.091472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.877 Running I/O for 5 seconds... 
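The big-I/O pass below reuses the same bdevperf harness with -o 65536, so each MiB/s figure is just IOPS times the I/O size divided by 2^20. As a quick check against the samples that follow (using the third five-second tick, 2971.00 IOPS at 64 KiB):

awk 'BEGIN { printf "%.2f MiB/s\n", 2971.00 * 65536 / (1024 * 1024) }'
# prints 185.69 MiB/s, matching the recorded 185.69 MiB/s sample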
00:07:33.714 979.00 IOPS, 61.19 MiB/s [2024-12-16T21:13:23.980Z] 2508.50 IOPS, 156.78 MiB/s [2024-12-16T21:13:23.980Z] 2971.00 IOPS, 185.69 MiB/s 00:07:34.280 Latency(us) 00:07:34.280 [2024-12-16T21:13:23.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.280 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.280 Verification LBA range: start 0x0 length 0xbd0b 00:07:34.280 Nvme0n1 : 5.91 102.31 6.39 0.00 0.00 1184916.04 17745.13 1400252.26 00:07:34.280 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.280 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:34.280 Nvme0n1 : 5.81 97.62 6.10 0.00 0.00 1238386.96 16938.54 1845493.76 00:07:34.280 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.280 Verification LBA range: start 0x0 length 0x4ff8 00:07:34.280 Nvme1n1p1 : 5.80 100.43 6.28 0.00 0.00 1180376.08 55251.89 1290555.08 00:07:34.280 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.280 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:34.280 Nvme1n1p1 : 5.81 100.16 6.26 0.00 0.00 1175804.02 41741.39 1858399.31 00:07:34.280 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.280 Verification LBA range: start 0x0 length 0x4ff7 00:07:34.280 Nvme1n1p2 : 5.92 90.94 5.68 0.00 0.00 1262031.83 113730.17 1948738.17 00:07:34.280 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.281 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:34.281 Nvme1n1p2 : 6.01 104.24 6.52 0.00 0.00 1080746.78 64124.46 1897115.96 00:07:34.281 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.281 Verification LBA range: start 0x0 length 0x8000 00:07:34.281 Nvme2n1 : 6.00 111.74 6.98 0.00 0.00 998701.82 77836.60 1109877.37 00:07:34.281 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.281 Verification LBA range: start 0x8000 length 0x8000 00:07:34.281 Nvme2n1 : 6.01 109.21 6.83 0.00 0.00 1006134.68 83886.08 1922927.06 00:07:34.281 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.281 Verification LBA range: start 0x0 length 0x8000 00:07:34.281 Nvme2n2 : 6.04 116.56 7.29 0.00 0.00 926009.93 41539.74 1058255.16 00:07:34.281 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.281 Verification LBA range: start 0x8000 length 0x8000 00:07:34.281 Nvme2n2 : 6.18 123.84 7.74 0.00 0.00 853294.93 57268.38 1084066.26 00:07:34.281 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.281 Verification LBA range: start 0x0 length 0x8000 00:07:34.281 Nvme2n3 : 6.11 125.62 7.85 0.00 0.00 831099.93 40934.79 1084066.26 00:07:34.281 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.281 Verification LBA range: start 0x8000 length 0x8000 00:07:34.281 Nvme2n3 : 6.25 129.98 8.12 0.00 0.00 787548.15 45572.73 2013265.92 00:07:34.281 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:34.281 Verification LBA range: start 0x0 length 0x2000 00:07:34.281 Nvme3n1 : 6.25 159.80 9.99 0.00 0.00 634127.69 617.55 1116330.14 00:07:34.281 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:34.281 Verification LBA range: start 0x2000 length 0x2000 00:07:34.281 Nvme3n1 : 6.33 194.41 12.15 0.00 0.00 513230.33 253.64 1509949.44 00:07:34.281 
[2024-12-16T21:13:23.981Z] =================================================================================================================== 00:07:34.281 [2024-12-16T21:13:23.981Z] Total : 1666.86 104.18 0.00 0.00 922613.06 253.64 2013265.92 00:07:34.848 00:07:34.848 real 0m7.616s 00:07:34.848 user 0m14.541s 00:07:34.848 sys 0m0.213s 00:07:34.848 ************************************ 00:07:34.848 END TEST bdev_verify_big_io 00:07:34.848 ************************************ 00:07:34.848 21:13:24 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.848 21:13:24 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:34.848 21:13:24 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.848 21:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:34.848 21:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.848 21:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.848 ************************************ 00:07:34.848 START TEST bdev_write_zeroes 00:07:34.848 ************************************ 00:07:34.848 21:13:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.165 [2024-12-16 21:13:24.594703] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:35.165 [2024-12-16 21:13:24.594821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75449 ] 00:07:35.165 [2024-12-16 21:13:24.735156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.165 [2024-12-16 21:13:24.751816] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.733 Running I/O for 1 seconds... 
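The write_zeroes pass below drives all seven bdevs from a single core (the EAL line above shows -c 0x1) for one second. The workload only applies to bdevs that advertise zero-fill support, which the bdev_get_bdevs output later in this log confirms with "write_zeroes": true in supported_io_types; a one-liner to check that support directly (the jq filter is illustrative, not part of the captured run):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.write_zeroes) | .name'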
00:07:36.668 65856.00 IOPS, 257.25 MiB/s 00:07:36.668 Latency(us) 00:07:36.668 [2024-12-16T21:13:26.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.668 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.668 Nvme0n1 : 1.02 9375.25 36.62 0.00 0.00 13621.45 7158.55 25407.80 00:07:36.668 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.668 Nvme1n1p1 : 1.03 9363.78 36.58 0.00 0.00 13623.89 8469.27 25206.15 00:07:36.668 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.668 Nvme1n1p2 : 1.03 9352.40 36.53 0.00 0.00 13615.99 8519.68 24197.91 00:07:36.668 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.668 Nvme2n1 : 1.03 9341.89 36.49 0.00 0.00 13611.13 8570.09 23794.61 00:07:36.668 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.668 Nvme2n2 : 1.03 9330.55 36.45 0.00 0.00 13610.15 8670.92 23189.66 00:07:36.668 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.668 Nvme2n3 : 1.03 9320.14 36.41 0.00 0.00 13581.98 8922.98 23895.43 00:07:36.668 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.668 Nvme3n1 : 1.03 9309.77 36.37 0.00 0.00 13537.62 7410.61 25508.63 00:07:36.668 [2024-12-16T21:13:26.368Z] =================================================================================================================== 00:07:36.668 [2024-12-16T21:13:26.368Z] Total : 65393.78 255.44 0.00 0.00 13600.32 7158.55 25508.63 00:07:36.668 00:07:36.668 real 0m1.798s 00:07:36.668 user 0m1.541s 00:07:36.668 sys 0m0.149s 00:07:36.668 21:13:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.668 ************************************ 00:07:36.668 END TEST bdev_write_zeroes 00:07:36.668 ************************************ 00:07:36.668 21:13:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:36.930 21:13:26 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:36.930 21:13:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:36.931 21:13:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.931 21:13:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.931 ************************************ 00:07:36.931 START TEST bdev_json_nonenclosed 00:07:36.931 ************************************ 00:07:36.931 21:13:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:36.931 [2024-12-16 21:13:26.451274] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:36.931 [2024-12-16 21:13:26.451387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75486 ] 00:07:36.931 [2024-12-16 21:13:26.596671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.931 [2024-12-16 21:13:26.616044] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.931 [2024-12-16 21:13:26.616120] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:36.931 [2024-12-16 21:13:26.616135] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:36.931 [2024-12-16 21:13:26.616146] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.192 00:07:37.192 real 0m0.285s 00:07:37.192 user 0m0.103s 00:07:37.192 sys 0m0.078s 00:07:37.192 ************************************ 00:07:37.192 21:13:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.192 21:13:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:37.192 END TEST bdev_json_nonenclosed 00:07:37.192 ************************************ 00:07:37.192 21:13:26 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.192 21:13:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:37.192 21:13:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.192 21:13:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.192 ************************************ 00:07:37.192 START TEST bdev_json_nonarray 00:07:37.192 ************************************ 00:07:37.192 21:13:26 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.192 [2024-12-16 21:13:26.791903] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:37.192 [2024-12-16 21:13:26.792009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75511 ] 00:07:37.453 [2024-12-16 21:13:26.934837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.453 [2024-12-16 21:13:26.958139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.453 [2024-12-16 21:13:26.958232] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
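Both JSON tests here are deliberate failure cases: bdev_json_nonenclosed feeds bdevperf a config that is not enclosed in {}, and bdev_json_nonarray one whose 'subsystems' is not an array; each passes when the app logs the *ERROR* shown and spdk_app_stop exits non-zero. The actual nonenclosed.json and nonarray.json contents are not captured in this log, so the files below are plausible minimal reproductions, not the repo's originals:

cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# Passing either file via --json should reproduce the errors above and a
# non-zero exit, which is exactly what run_test asserts.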
00:07:37.453 [2024-12-16 21:13:26.958252] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:37.453 [2024-12-16 21:13:26.958263] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.453 00:07:37.453 real 0m0.295s 00:07:37.453 user 0m0.110s 00:07:37.453 sys 0m0.081s 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.453 ************************************ 00:07:37.453 END TEST bdev_json_nonarray 00:07:37.453 ************************************ 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:37.453 21:13:27 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:37.453 21:13:27 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:37.453 21:13:27 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:37.453 21:13:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.453 21:13:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.453 21:13:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.453 ************************************ 00:07:37.453 START TEST bdev_gpt_uuid 00:07:37.453 ************************************ 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75537 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75537 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 75537 ']' 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.453 21:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-12-16 21:13:27.176794] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:37.715 [2024-12-16 21:13:27.176957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75537 ] 00:07:37.715 [2024-12-16 21:13:27.324440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.715 [2024-12-16 21:13:27.354464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.658 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.658 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:38.658 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.658 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.658 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:38.658 Some configs were skipped because the RPC state that can call them passed over. 00:07:38.658 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.658 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:38.919 { 00:07:38.919 "name": "Nvme1n1p1", 00:07:38.919 "aliases": [ 00:07:38.919 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:38.919 ], 00:07:38.919 "product_name": "GPT Disk", 00:07:38.919 "block_size": 4096, 00:07:38.919 "num_blocks": 655104, 00:07:38.919 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:38.919 "assigned_rate_limits": { 00:07:38.919 "rw_ios_per_sec": 0, 00:07:38.919 "rw_mbytes_per_sec": 0, 00:07:38.919 "r_mbytes_per_sec": 0, 00:07:38.919 "w_mbytes_per_sec": 0 00:07:38.919 }, 00:07:38.919 "claimed": false, 00:07:38.919 "zoned": false, 00:07:38.919 "supported_io_types": { 00:07:38.919 "read": true, 00:07:38.919 "write": true, 00:07:38.919 "unmap": true, 00:07:38.919 "flush": true, 00:07:38.919 "reset": true, 00:07:38.919 "nvme_admin": false, 00:07:38.919 "nvme_io": false, 00:07:38.919 "nvme_io_md": false, 00:07:38.919 "write_zeroes": true, 00:07:38.919 "zcopy": false, 00:07:38.919 "get_zone_info": false, 00:07:38.919 "zone_management": false, 00:07:38.919 "zone_append": false, 00:07:38.919 "compare": true, 00:07:38.919 "compare_and_write": false, 00:07:38.919 "abort": true, 00:07:38.919 "seek_hole": false, 00:07:38.919 "seek_data": false, 00:07:38.919 "copy": true, 00:07:38.919 "nvme_iov_md": false 00:07:38.919 }, 00:07:38.919 "driver_specific": { 
00:07:38.919 "gpt": { 00:07:38.919 "base_bdev": "Nvme1n1", 00:07:38.919 "offset_blocks": 256, 00:07:38.919 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:38.919 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:38.919 "partition_name": "SPDK_TEST_first" 00:07:38.919 } 00:07:38.919 } 00:07:38.919 } 00:07:38.919 ]' 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:38.919 { 00:07:38.919 "name": "Nvme1n1p2", 00:07:38.919 "aliases": [ 00:07:38.919 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:38.919 ], 00:07:38.919 "product_name": "GPT Disk", 00:07:38.919 "block_size": 4096, 00:07:38.919 "num_blocks": 655103, 00:07:38.919 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:38.919 "assigned_rate_limits": { 00:07:38.919 "rw_ios_per_sec": 0, 00:07:38.919 "rw_mbytes_per_sec": 0, 00:07:38.919 "r_mbytes_per_sec": 0, 00:07:38.919 "w_mbytes_per_sec": 0 00:07:38.919 }, 00:07:38.919 "claimed": false, 00:07:38.919 "zoned": false, 00:07:38.919 "supported_io_types": { 00:07:38.919 "read": true, 00:07:38.919 "write": true, 00:07:38.919 "unmap": true, 00:07:38.919 "flush": true, 00:07:38.919 "reset": true, 00:07:38.919 "nvme_admin": false, 00:07:38.919 "nvme_io": false, 00:07:38.919 "nvme_io_md": false, 00:07:38.919 "write_zeroes": true, 00:07:38.919 "zcopy": false, 00:07:38.919 "get_zone_info": false, 00:07:38.919 "zone_management": false, 00:07:38.919 "zone_append": false, 00:07:38.919 "compare": true, 00:07:38.919 "compare_and_write": false, 00:07:38.919 "abort": true, 00:07:38.919 "seek_hole": false, 00:07:38.919 "seek_data": false, 00:07:38.919 "copy": true, 00:07:38.919 "nvme_iov_md": false 00:07:38.919 }, 00:07:38.919 "driver_specific": { 00:07:38.919 "gpt": { 00:07:38.919 "base_bdev": "Nvme1n1", 00:07:38.919 "offset_blocks": 655360, 00:07:38.919 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:38.919 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:38.919 "partition_name": "SPDK_TEST_second" 00:07:38.919 } 00:07:38.919 } 00:07:38.919 } 00:07:38.919 ]' 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 75537 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 75537 ']' 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 75537 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.919 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75537 00:07:39.178 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.178 killing process with pid 75537 00:07:39.178 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.178 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75537' 00:07:39.178 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 75537 00:07:39.178 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 75537 00:07:39.437 00:07:39.437 real 0m1.828s 00:07:39.437 user 0m1.969s 00:07:39.437 sys 0m0.400s 00:07:39.437 ************************************ 00:07:39.437 END TEST bdev_gpt_uuid 00:07:39.437 ************************************ 00:07:39.437 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.437 21:13:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.437 21:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:39.437 21:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:39.437 21:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:39.437 21:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:39.438 21:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.438 21:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:39.438 21:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:39.438 21:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:39.438 21:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:39.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.959 Waiting for block devices as requested 00:07:39.959 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.959 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:39.959 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:40.221 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:45.510 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:45.510 21:13:34 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:45.510 21:13:34 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:45.510 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:45.510 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:45.510 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:45.510 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:45.510 21:13:35 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:45.510 00:07:45.510 real 0m49.090s 00:07:45.510 user 1m1.938s 00:07:45.510 sys 0m7.956s 00:07:45.510 ************************************ 00:07:45.510 END TEST blockdev_nvme_gpt 00:07:45.510 ************************************ 00:07:45.510 21:13:35 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.510 21:13:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.510 21:13:35 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:45.510 21:13:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.510 21:13:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.510 21:13:35 -- common/autotest_common.sh@10 -- # set +x 00:07:45.510 ************************************ 00:07:45.510 START TEST nvme 00:07:45.510 ************************************ 00:07:45.510 21:13:35 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:45.772 * Looking for test storage... 00:07:45.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:45.772 21:13:35 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.772 21:13:35 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.772 21:13:35 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.772 21:13:35 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.772 21:13:35 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.772 21:13:35 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.772 21:13:35 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.772 21:13:35 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.772 21:13:35 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.772 21:13:35 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.772 21:13:35 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.772 21:13:35 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.772 21:13:35 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.772 21:13:35 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.772 21:13:35 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.772 21:13:35 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:45.772 21:13:35 nvme -- scripts/common.sh@345 -- # : 1 00:07:45.772 21:13:35 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.772 21:13:35 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.772 21:13:35 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:45.772 21:13:35 nvme -- scripts/common.sh@353 -- # local d=1 00:07:45.772 21:13:35 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.772 21:13:35 nvme -- scripts/common.sh@355 -- # echo 1 00:07:45.772 21:13:35 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.772 21:13:35 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:45.772 21:13:35 nvme -- scripts/common.sh@353 -- # local d=2 00:07:45.772 21:13:35 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.772 21:13:35 nvme -- scripts/common.sh@355 -- # echo 2 00:07:45.772 21:13:35 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.772 21:13:35 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.772 21:13:35 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.772 21:13:35 nvme -- scripts/common.sh@368 -- # return 0 00:07:45.772 21:13:35 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.772 21:13:35 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.772 --rc genhtml_branch_coverage=1 00:07:45.772 --rc genhtml_function_coverage=1 00:07:45.772 --rc genhtml_legend=1 00:07:45.772 --rc geninfo_all_blocks=1 00:07:45.772 --rc geninfo_unexecuted_blocks=1 00:07:45.772 00:07:45.772 ' 00:07:45.772 21:13:35 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.772 --rc genhtml_branch_coverage=1 00:07:45.772 --rc genhtml_function_coverage=1 00:07:45.772 --rc genhtml_legend=1 00:07:45.772 --rc geninfo_all_blocks=1 00:07:45.772 --rc geninfo_unexecuted_blocks=1 00:07:45.772 00:07:45.772 ' 00:07:45.772 21:13:35 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.772 --rc genhtml_branch_coverage=1 00:07:45.772 --rc genhtml_function_coverage=1 00:07:45.772 --rc genhtml_legend=1 00:07:45.772 --rc geninfo_all_blocks=1 00:07:45.772 --rc geninfo_unexecuted_blocks=1 00:07:45.772 00:07:45.772 ' 00:07:45.772 21:13:35 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.772 --rc genhtml_branch_coverage=1 00:07:45.772 --rc genhtml_function_coverage=1 00:07:45.772 --rc genhtml_legend=1 00:07:45.772 --rc geninfo_all_blocks=1 00:07:45.772 --rc geninfo_unexecuted_blocks=1 00:07:45.772 00:07:45.772 ' 00:07:45.772 21:13:35 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:46.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.917 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.917 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.917 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.917 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.917 21:13:36 nvme -- nvme/nvme.sh@79 -- # uname 00:07:46.917 21:13:36 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:46.917 21:13:36 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:46.917 21:13:36 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:46.917 21:13:36 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:46.917 21:13:36 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:46.917 21:13:36 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:46.917 21:13:36 nvme -- common/autotest_common.sh@1075 -- # stubpid=76163 00:07:46.917 Waiting for stub to ready for secondary processes... 00:07:46.917 21:13:36 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:46.917 21:13:36 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:46.917 21:13:36 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76163 ]] 00:07:46.917 21:13:36 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:46.917 21:13:36 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:46.917 [2024-12-16 21:13:36.536755] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:46.917 [2024-12-16 21:13:36.536875] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:47.859 [2024-12-16 21:13:37.318209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.859 [2024-12-16 21:13:37.336421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.859 [2024-12-16 21:13:37.336798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.859 [2024-12-16 21:13:37.336921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.859 [2024-12-16 21:13:37.349164] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:47.859 [2024-12-16 21:13:37.349230] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.859 [2024-12-16 21:13:37.360999] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:47.859 [2024-12-16 21:13:37.361181] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:47.859 [2024-12-16 21:13:37.361690] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.859 [2024-12-16 21:13:37.361964] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:47.859 [2024-12-16 21:13:37.362209] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:47.859 [2024-12-16 21:13:37.362750] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.859 [2024-12-16 21:13:37.362963] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:47.859 [2024-12-16 21:13:37.363049] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:47.859 [2024-12-16 21:13:37.363818] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.859 [2024-12-16 21:13:37.364004] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:47.859 [2024-12-16 21:13:37.364096] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:47.859 [2024-12-16 21:13:37.364146] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:47.859 [2024-12-16 21:13:37.364214] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:47.859 done. 00:07:47.859 21:13:37 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:47.859 21:13:37 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:47.859 21:13:37 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:47.859 21:13:37 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:47.859 21:13:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.859 21:13:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.859 ************************************ 00:07:47.859 START TEST nvme_reset 00:07:47.859 ************************************ 00:07:47.859 21:13:37 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:48.117 Initializing NVMe Controllers 00:07:48.117 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:48.117 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:48.117 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:48.117 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:48.117 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:48.117 ************************************ 00:07:48.117 END TEST nvme_reset 00:07:48.117 ************************************ 00:07:48.117 00:07:48.117 real 0m0.182s 00:07:48.117 user 0m0.069s 00:07:48.117 sys 0m0.071s 00:07:48.117 21:13:37 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.117 21:13:37 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:48.117 21:13:37 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:48.117 21:13:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.117 21:13:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.117 21:13:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.117 ************************************ 00:07:48.117 START TEST nvme_identify 00:07:48.117 ************************************ 00:07:48.117 21:13:37 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:48.117 21:13:37 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:48.117 21:13:37 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:48.117 21:13:37 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:48.117 21:13:37 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:48.117 21:13:37 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:48.117 21:13:37 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:48.117 21:13:37 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:48.117 21:13:37 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:48.117 21:13:37 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:48.118 21:13:37 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:48.118 21:13:37 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:48.118 21:13:37 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:48.379 [2024-12-16 
21:13:37.947164] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 76184 terminated unexpected 00:07:48.379 ===================================================== 00:07:48.379 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.379 ===================================================== 00:07:48.379 Controller Capabilities/Features 00:07:48.379 ================================ 00:07:48.379 Vendor ID: 1b36 00:07:48.379 Subsystem Vendor ID: 1af4 00:07:48.379 Serial Number: 12340 00:07:48.379 Model Number: QEMU NVMe Ctrl 00:07:48.379 Firmware Version: 8.0.0 00:07:48.379 Recommended Arb Burst: 6 00:07:48.379 IEEE OUI Identifier: 00 54 52 00:07:48.379 Multi-path I/O 00:07:48.379 May have multiple subsystem ports: No 00:07:48.379 May have multiple controllers: No 00:07:48.379 Associated with SR-IOV VF: No 00:07:48.379 Max Data Transfer Size: 524288 00:07:48.379 Max Number of Namespaces: 256 00:07:48.379 Max Number of I/O Queues: 64 00:07:48.379 NVMe Specification Version (VS): 1.4 00:07:48.379 NVMe Specification Version (Identify): 1.4 00:07:48.379 Maximum Queue Entries: 2048 00:07:48.379 Contiguous Queues Required: Yes 00:07:48.379 Arbitration Mechanisms Supported 00:07:48.379 Weighted Round Robin: Not Supported 00:07:48.379 Vendor Specific: Not Supported 00:07:48.379 Reset Timeout: 7500 ms 00:07:48.379 Doorbell Stride: 4 bytes 00:07:48.379 NVM Subsystem Reset: Not Supported 00:07:48.379 Command Sets Supported 00:07:48.379 NVM Command Set: Supported 00:07:48.379 Boot Partition: Not Supported 00:07:48.379 Memory Page Size Minimum: 4096 bytes 00:07:48.379 Memory Page Size Maximum: 65536 bytes 00:07:48.379 Persistent Memory Region: Not Supported 00:07:48.379 Optional Asynchronous Events Supported 00:07:48.379 Namespace Attribute Notices: Supported 00:07:48.379 Firmware Activation Notices: Not Supported 00:07:48.379 ANA Change Notices: Not Supported 00:07:48.379 PLE Aggregate Log Change Notices: Not Supported 00:07:48.379 LBA Status Info Alert Notices: Not Supported 00:07:48.379 EGE Aggregate Log Change Notices: Not Supported 00:07:48.379 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.379 Zone Descriptor Change Notices: Not Supported 00:07:48.379 Discovery Log Change Notices: Not Supported 00:07:48.379 Controller Attributes 00:07:48.379 128-bit Host Identifier: Not Supported 00:07:48.379 Non-Operational Permissive Mode: Not Supported 00:07:48.379 NVM Sets: Not Supported 00:07:48.379 Read Recovery Levels: Not Supported 00:07:48.379 Endurance Groups: Not Supported 00:07:48.379 Predictable Latency Mode: Not Supported 00:07:48.379 Traffic Based Keep ALive: Not Supported 00:07:48.379 Namespace Granularity: Not Supported 00:07:48.379 SQ Associations: Not Supported 00:07:48.379 UUID List: Not Supported 00:07:48.379 Multi-Domain Subsystem: Not Supported 00:07:48.379 Fixed Capacity Management: Not Supported 00:07:48.379 Variable Capacity Management: Not Supported 00:07:48.379 Delete Endurance Group: Not Supported 00:07:48.379 Delete NVM Set: Not Supported 00:07:48.379 Extended LBA Formats Supported: Supported 00:07:48.379 Flexible Data Placement Supported: Not Supported 00:07:48.379 00:07:48.379 Controller Memory Buffer Support 00:07:48.379 ================================ 00:07:48.379 Supported: No 00:07:48.379 00:07:48.379 Persistent Memory Region Support 00:07:48.379 ================================ 00:07:48.379 Supported: No 00:07:48.379 00:07:48.379 Admin Command Set Attributes 00:07:48.379 ============================ 00:07:48.379 Security Send/Receive: 
Not Supported 00:07:48.379 Format NVM: Supported 00:07:48.379 Firmware Activate/Download: Not Supported 00:07:48.379 Namespace Management: Supported 00:07:48.379 Device Self-Test: Not Supported 00:07:48.379 Directives: Supported 00:07:48.379 NVMe-MI: Not Supported 00:07:48.379 Virtualization Management: Not Supported 00:07:48.379 Doorbell Buffer Config: Supported 00:07:48.379 Get LBA Status Capability: Not Supported 00:07:48.379 Command & Feature Lockdown Capability: Not Supported 00:07:48.379 Abort Command Limit: 4 00:07:48.379 Async Event Request Limit: 4 00:07:48.379 Number of Firmware Slots: N/A 00:07:48.379 Firmware Slot 1 Read-Only: N/A 00:07:48.379 Firmware Activation Without Reset: N/A 00:07:48.379 Multiple Update Detection Support: N/A 00:07:48.379 Firmware Update Granularity: No Information Provided 00:07:48.379 Per-Namespace SMART Log: Yes 00:07:48.379 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.379 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:48.379 Command Effects Log Page: Supported 00:07:48.379 Get Log Page Extended Data: Supported 00:07:48.379 Telemetry Log Pages: Not Supported 00:07:48.379 Persistent Event Log Pages: Not Supported 00:07:48.379 Supported Log Pages Log Page: May Support 00:07:48.379 Commands Supported & Effects Log Page: Not Supported 00:07:48.379 Feature Identifiers & Effects Log Page:May Support 00:07:48.379 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.379 Data Area 4 for Telemetry Log: Not Supported 00:07:48.379 Error Log Page Entries Supported: 1 00:07:48.379 Keep Alive: Not Supported 00:07:48.379 00:07:48.380 NVM Command Set Attributes 00:07:48.380 ========================== 00:07:48.380 Submission Queue Entry Size 00:07:48.380 Max: 64 00:07:48.380 Min: 64 00:07:48.380 Completion Queue Entry Size 00:07:48.380 Max: 16 00:07:48.380 Min: 16 00:07:48.380 Number of Namespaces: 256 00:07:48.380 Compare Command: Supported 00:07:48.380 Write Uncorrectable Command: Not Supported 00:07:48.380 Dataset Management Command: Supported 00:07:48.380 Write Zeroes Command: Supported 00:07:48.380 Set Features Save Field: Supported 00:07:48.380 Reservations: Not Supported 00:07:48.380 Timestamp: Supported 00:07:48.380 Copy: Supported 00:07:48.380 Volatile Write Cache: Present 00:07:48.380 Atomic Write Unit (Normal): 1 00:07:48.380 Atomic Write Unit (PFail): 1 00:07:48.380 Atomic Compare & Write Unit: 1 00:07:48.380 Fused Compare & Write: Not Supported 00:07:48.380 Scatter-Gather List 00:07:48.380 SGL Command Set: Supported 00:07:48.380 SGL Keyed: Not Supported 00:07:48.380 SGL Bit Bucket Descriptor: Not Supported 00:07:48.380 SGL Metadata Pointer: Not Supported 00:07:48.380 Oversized SGL: Not Supported 00:07:48.380 SGL Metadata Address: Not Supported 00:07:48.380 SGL Offset: Not Supported 00:07:48.380 Transport SGL Data Block: Not Supported 00:07:48.380 Replay Protected Memory Block: Not Supported 00:07:48.380 00:07:48.380 Firmware Slot Information 00:07:48.380 ========================= 00:07:48.380 Active slot: 1 00:07:48.380 Slot 1 Firmware Revision: 1.0 00:07:48.380 00:07:48.380 00:07:48.380 Commands Supported and Effects 00:07:48.380 ============================== 00:07:48.380 Admin Commands 00:07:48.380 -------------- 00:07:48.380 Delete I/O Submission Queue (00h): Supported 00:07:48.380 Create I/O Submission Queue (01h): Supported 00:07:48.380 Get Log Page (02h): Supported 00:07:48.380 Delete I/O Completion Queue (04h): Supported 00:07:48.380 Create I/O Completion Queue (05h): Supported 00:07:48.380 Identify (06h): Supported 
00:07:48.380 Abort (08h): Supported 00:07:48.380 Set Features (09h): Supported 00:07:48.380 Get Features (0Ah): Supported 00:07:48.380 Asynchronous Event Request (0Ch): Supported 00:07:48.380 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.380 Directive Send (19h): Supported 00:07:48.380 Directive Receive (1Ah): Supported 00:07:48.380 Virtualization Management (1Ch): Supported 00:07:48.380 Doorbell Buffer Config (7Ch): Supported 00:07:48.380 Format NVM (80h): Supported LBA-Change 00:07:48.380 I/O Commands 00:07:48.380 ------------ 00:07:48.380 Flush (00h): Supported LBA-Change 00:07:48.380 Write (01h): Supported LBA-Change 00:07:48.380 Read (02h): Supported 00:07:48.380 Compare (05h): Supported 00:07:48.380 Write Zeroes (08h): Supported LBA-Change 00:07:48.380 Dataset Management (09h): Supported LBA-Change 00:07:48.380 Unknown (0Ch): Supported 00:07:48.380 Unknown (12h): Supported 00:07:48.380 Copy (19h): Supported LBA-Change 00:07:48.380 Unknown (1Dh): Supported LBA-Change 00:07:48.380 00:07:48.380 Error Log 00:07:48.380 ========= 00:07:48.380 00:07:48.380 Arbitration 00:07:48.380 =========== 00:07:48.380 Arbitration Burst: no limit 00:07:48.380 00:07:48.380 Power Management 00:07:48.380 ================ 00:07:48.380 Number of Power States: 1 00:07:48.380 Current Power State: Power State #0 00:07:48.380 Power State #0: 00:07:48.380 Max Power: 25.00 W 00:07:48.380 Non-Operational State: Operational 00:07:48.380 Entry Latency: 16 microseconds 00:07:48.380 Exit Latency: 4 microseconds 00:07:48.380 Relative Read Throughput: 0 00:07:48.380 Relative Read Latency: 0 00:07:48.380 Relative Write Throughput: 0 00:07:48.380 Relative Write Latency: 0 [2024-12-16 21:13:37.948369] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 76184 terminated unexpected 00:07:48.380 Idle Power: Not Reported 00:07:48.380 Active Power: Not Reported 00:07:48.380 Non-Operational Permissive Mode: Not Supported 00:07:48.380 00:07:48.380 Health Information 00:07:48.380 ================== 00:07:48.380 Critical Warnings: 00:07:48.380 Available Spare Space: OK 00:07:48.380 Temperature: OK 00:07:48.380 Device Reliability: OK 00:07:48.380 Read Only: No 00:07:48.380 Volatile Memory Backup: OK 00:07:48.380 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.380 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.380 Available Spare: 0% 00:07:48.380 Available Spare Threshold: 0% 00:07:48.380 Life Percentage Used: 0% 00:07:48.380 Data Units Read: 715 00:07:48.380 Data Units Written: 643 00:07:48.380 Host Read Commands: 39688 00:07:48.380 Host Write Commands: 39474 00:07:48.380 Controller Busy Time: 0 minutes 00:07:48.380 Power Cycles: 0 00:07:48.380 Power On Hours: 0 hours 00:07:48.380 Unsafe Shutdowns: 0 00:07:48.380 Unrecoverable Media Errors: 0 00:07:48.380 Lifetime Error Log Entries: 0 00:07:48.380 Warning Temperature Time: 0 minutes 00:07:48.380 Critical Temperature Time: 0 minutes 00:07:48.380 00:07:48.380 Number of Queues 00:07:48.380 ================ 00:07:48.380 Number of I/O Submission Queues: 64 00:07:48.380 Number of I/O Completion Queues: 64 00:07:48.380 00:07:48.380 ZNS Specific Controller Data 00:07:48.380 ============================ 00:07:48.380 Zone Append Size Limit: 0 00:07:48.380 00:07:48.380 00:07:48.380 Active Namespaces 00:07:48.380 ================= 00:07:48.380 Namespace ID:1 00:07:48.380 Error Recovery Timeout: Unlimited 00:07:48.380 Command Set Identifier: NVM (00h) 00:07:48.380 Deallocate: Supported
Deallocated/Unwritten Error: Supported 00:07:48.380 Deallocated Read Value: All 0x00 00:07:48.380 Deallocate in Write Zeroes: Not Supported 00:07:48.380 Deallocated Guard Field: 0xFFFF 00:07:48.380 Flush: Supported 00:07:48.380 Reservation: Not Supported 00:07:48.380 Metadata Transferred as: Separate Metadata Buffer 00:07:48.380 Namespace Sharing Capabilities: Private 00:07:48.380 Size (in LBAs): 1548666 (5GiB) 00:07:48.380 Capacity (in LBAs): 1548666 (5GiB) 00:07:48.380 Utilization (in LBAs): 1548666 (5GiB) 00:07:48.380 Thin Provisioning: Not Supported 00:07:48.380 Per-NS Atomic Units: No 00:07:48.380 Maximum Single Source Range Length: 128 00:07:48.380 Maximum Copy Length: 128 00:07:48.380 Maximum Source Range Count: 128 00:07:48.380 NGUID/EUI64 Never Reused: No 00:07:48.380 Namespace Write Protected: No 00:07:48.380 Number of LBA Formats: 8 00:07:48.380 Current LBA Format: LBA Format #07 00:07:48.380 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.380 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.380 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.380 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.380 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.380 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.380 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.380 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.380 00:07:48.380 NVM Specific Namespace Data 00:07:48.380 =========================== 00:07:48.380 Logical Block Storage Tag Mask: 0 00:07:48.380 Protection Information Capabilities: 00:07:48.380 16b Guard Protection Information Storage Tag Support: No 00:07:48.380 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.380 Storage Tag Check Read Support: No 00:07:48.380 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.380 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.380 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.380 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.380 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.380 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.380 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.380 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.380 ===================================================== 00:07:48.380 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.380 ===================================================== 00:07:48.380 Controller Capabilities/Features 00:07:48.380 ================================ 00:07:48.380 Vendor ID: 1b36 00:07:48.380 Subsystem Vendor ID: 1af4 00:07:48.380 Serial Number: 12341 00:07:48.380 Model Number: QEMU NVMe Ctrl 00:07:48.380 Firmware Version: 8.0.0 00:07:48.380 Recommended Arb Burst: 6 00:07:48.380 IEEE OUI Identifier: 00 54 52 00:07:48.380 Multi-path I/O 00:07:48.380 May have multiple subsystem ports: No 00:07:48.380 May have multiple controllers: No 00:07:48.380 Associated with SR-IOV VF: No 00:07:48.380 Max Data Transfer Size: 524288 00:07:48.380 Max Number of Namespaces: 256 00:07:48.380 Max Number of I/O Queues: 64 00:07:48.380 NVMe Specification Version (VS): 1.4 00:07:48.380 NVMe 
Specification Version (Identify): 1.4 00:07:48.380 Maximum Queue Entries: 2048 00:07:48.380 Contiguous Queues Required: Yes 00:07:48.380 Arbitration Mechanisms Supported 00:07:48.380 Weighted Round Robin: Not Supported 00:07:48.380 Vendor Specific: Not Supported 00:07:48.381 Reset Timeout: 7500 ms 00:07:48.381 Doorbell Stride: 4 bytes 00:07:48.381 NVM Subsystem Reset: Not Supported 00:07:48.381 Command Sets Supported 00:07:48.381 NVM Command Set: Supported 00:07:48.381 Boot Partition: Not Supported 00:07:48.381 Memory Page Size Minimum: 4096 bytes 00:07:48.381 Memory Page Size Maximum: 65536 bytes 00:07:48.381 Persistent Memory Region: Not Supported 00:07:48.381 Optional Asynchronous Events Supported 00:07:48.381 Namespace Attribute Notices: Supported 00:07:48.381 Firmware Activation Notices: Not Supported 00:07:48.381 ANA Change Notices: Not Supported 00:07:48.381 PLE Aggregate Log Change Notices: Not Supported 00:07:48.381 LBA Status Info Alert Notices: Not Supported 00:07:48.381 EGE Aggregate Log Change Notices: Not Supported 00:07:48.381 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.381 Zone Descriptor Change Notices: Not Supported 00:07:48.381 Discovery Log Change Notices: Not Supported 00:07:48.381 Controller Attributes 00:07:48.381 128-bit Host Identifier: Not Supported 00:07:48.381 Non-Operational Permissive Mode: Not Supported 00:07:48.381 NVM Sets: Not Supported 00:07:48.381 Read Recovery Levels: Not Supported 00:07:48.381 Endurance Groups: Not Supported 00:07:48.381 Predictable Latency Mode: Not Supported 00:07:48.381 Traffic Based Keep ALive: Not Supported 00:07:48.381 Namespace Granularity: Not Supported 00:07:48.381 SQ Associations: Not Supported 00:07:48.381 UUID List: Not Supported 00:07:48.381 Multi-Domain Subsystem: Not Supported 00:07:48.381 Fixed Capacity Management: Not Supported 00:07:48.381 Variable Capacity Management: Not Supported 00:07:48.381 Delete Endurance Group: Not Supported 00:07:48.381 Delete NVM Set: Not Supported 00:07:48.381 Extended LBA Formats Supported: Supported 00:07:48.381 Flexible Data Placement Supported: Not Supported 00:07:48.381 00:07:48.381 Controller Memory Buffer Support 00:07:48.381 ================================ 00:07:48.381 Supported: No 00:07:48.381 00:07:48.381 Persistent Memory Region Support 00:07:48.381 ================================ 00:07:48.381 Supported: No 00:07:48.381 00:07:48.381 Admin Command Set Attributes 00:07:48.381 ============================ 00:07:48.381 Security Send/Receive: Not Supported 00:07:48.381 Format NVM: Supported 00:07:48.381 Firmware Activate/Download: Not Supported 00:07:48.381 Namespace Management: Supported 00:07:48.381 Device Self-Test: Not Supported 00:07:48.381 Directives: Supported 00:07:48.381 NVMe-MI: Not Supported 00:07:48.381 Virtualization Management: Not Supported 00:07:48.381 Doorbell Buffer Config: Supported 00:07:48.381 Get LBA Status Capability: Not Supported 00:07:48.381 Command & Feature Lockdown Capability: Not Supported 00:07:48.381 Abort Command Limit: 4 00:07:48.381 Async Event Request Limit: 4 00:07:48.381 Number of Firmware Slots: N/A 00:07:48.381 Firmware Slot 1 Read-Only: N/A 00:07:48.381 Firmware Activation Without Reset: N/A 00:07:48.381 Multiple Update Detection Support: N/A 00:07:48.381 Firmware Update Granularity: No Information Provided 00:07:48.381 Per-Namespace SMART Log: Yes 00:07:48.381 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.381 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:48.381 Command Effects Log Page: Supported 
00:07:48.381 Get Log Page Extended Data: Supported 00:07:48.381 Telemetry Log Pages: Not Supported 00:07:48.381 Persistent Event Log Pages: Not Supported 00:07:48.381 Supported Log Pages Log Page: May Support 00:07:48.381 Commands Supported & Effects Log Page: Not Supported 00:07:48.381 Feature Identifiers & Effects Log Page:May Support 00:07:48.381 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.381 Data Area 4 for Telemetry Log: Not Supported 00:07:48.381 Error Log Page Entries Supported: 1 00:07:48.381 Keep Alive: Not Supported 00:07:48.381 00:07:48.381 NVM Command Set Attributes 00:07:48.381 ========================== 00:07:48.381 Submission Queue Entry Size 00:07:48.381 Max: 64 00:07:48.381 Min: 64 00:07:48.381 Completion Queue Entry Size 00:07:48.381 Max: 16 00:07:48.381 Min: 16 00:07:48.381 Number of Namespaces: 256 00:07:48.381 Compare Command: Supported 00:07:48.381 Write Uncorrectable Command: Not Supported 00:07:48.381 Dataset Management Command: Supported 00:07:48.381 Write Zeroes Command: Supported 00:07:48.381 Set Features Save Field: Supported 00:07:48.381 Reservations: Not Supported 00:07:48.381 Timestamp: Supported 00:07:48.381 Copy: Supported 00:07:48.381 Volatile Write Cache: Present 00:07:48.381 Atomic Write Unit (Normal): 1 00:07:48.381 Atomic Write Unit (PFail): 1 00:07:48.381 Atomic Compare & Write Unit: 1 00:07:48.381 Fused Compare & Write: Not Supported 00:07:48.381 Scatter-Gather List 00:07:48.381 SGL Command Set: Supported 00:07:48.381 SGL Keyed: Not Supported 00:07:48.381 SGL Bit Bucket Descriptor: Not Supported 00:07:48.381 SGL Metadata Pointer: Not Supported 00:07:48.381 Oversized SGL: Not Supported 00:07:48.381 SGL Metadata Address: Not Supported 00:07:48.381 SGL Offset: Not Supported 00:07:48.381 Transport SGL Data Block: Not Supported 00:07:48.381 Replay Protected Memory Block: Not Supported 00:07:48.381 00:07:48.381 Firmware Slot Information 00:07:48.381 ========================= 00:07:48.381 Active slot: 1 00:07:48.381 Slot 1 Firmware Revision: 1.0 00:07:48.381 00:07:48.381 00:07:48.381 Commands Supported and Effects 00:07:48.381 ============================== 00:07:48.381 Admin Commands 00:07:48.381 -------------- 00:07:48.381 Delete I/O Submission Queue (00h): Supported 00:07:48.381 Create I/O Submission Queue (01h): Supported 00:07:48.381 Get Log Page (02h): Supported 00:07:48.381 Delete I/O Completion Queue (04h): Supported 00:07:48.381 Create I/O Completion Queue (05h): Supported 00:07:48.381 Identify (06h): Supported 00:07:48.381 Abort (08h): Supported 00:07:48.381 Set Features (09h): Supported 00:07:48.381 Get Features (0Ah): Supported 00:07:48.381 Asynchronous Event Request (0Ch): Supported 00:07:48.381 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.381 Directive Send (19h): Supported 00:07:48.381 Directive Receive (1Ah): Supported 00:07:48.381 Virtualization Management (1Ch): Supported 00:07:48.381 Doorbell Buffer Config (7Ch): Supported 00:07:48.381 Format NVM (80h): Supported LBA-Change 00:07:48.381 I/O Commands 00:07:48.381 ------------ 00:07:48.381 Flush (00h): Supported LBA-Change 00:07:48.381 Write (01h): Supported LBA-Change 00:07:48.381 Read (02h): Supported 00:07:48.381 Compare (05h): Supported 00:07:48.381 Write Zeroes (08h): Supported LBA-Change 00:07:48.381 Dataset Management (09h): Supported LBA-Change 00:07:48.381 Unknown (0Ch): Supported 00:07:48.381 Unknown (12h): Supported 00:07:48.381 Copy (19h): Supported LBA-Change 00:07:48.381 Unknown (1Dh): Supported LBA-Change 00:07:48.381 00:07:48.381 Error 
Log 00:07:48.381 ========= 00:07:48.381 00:07:48.381 Arbitration 00:07:48.381 =========== 00:07:48.381 Arbitration Burst: no limit 00:07:48.381 00:07:48.381 Power Management 00:07:48.381 ================ 00:07:48.381 Number of Power States: 1 00:07:48.381 Current Power State: Power State #0 00:07:48.381 Power State #0: 00:07:48.381 Max Power: 25.00 W 00:07:48.381 Non-Operational State: Operational 00:07:48.381 Entry Latency: 16 microseconds 00:07:48.381 Exit Latency: 4 microseconds 00:07:48.381 Relative Read Throughput: 0 00:07:48.381 Relative Read Latency: 0 00:07:48.381 Relative Write Throughput: 0 00:07:48.381 Relative Write Latency: 0 00:07:48.381 Idle Power: Not Reported 00:07:48.381 Active Power: Not Reported 00:07:48.381 Non-Operational Permissive Mode: Not Supported 00:07:48.381 00:07:48.381 Health Information 00:07:48.381 ================== 00:07:48.381 Critical Warnings: 00:07:48.381 Available Spare Space: OK [2024-12-16 21:13:37.949009] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 76184 terminated unexpected 00:07:48.381 Temperature: OK 00:07:48.381 Device Reliability: OK 00:07:48.381 Read Only: No 00:07:48.381 Volatile Memory Backup: OK 00:07:48.381 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.381 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.381 Available Spare: 0% 00:07:48.381 Available Spare Threshold: 0% 00:07:48.381 Life Percentage Used: 0% 00:07:48.381 Data Units Read: 1082 00:07:48.381 Data Units Written: 948 00:07:48.381 Host Read Commands: 60714 00:07:48.381 Host Write Commands: 59500 00:07:48.381 Controller Busy Time: 0 minutes 00:07:48.381 Power Cycles: 0 00:07:48.381 Power On Hours: 0 hours 00:07:48.381 Unsafe Shutdowns: 0 00:07:48.381 Unrecoverable Media Errors: 0 00:07:48.381 Lifetime Error Log Entries: 0 00:07:48.381 Warning Temperature Time: 0 minutes 00:07:48.381 Critical Temperature Time: 0 minutes 00:07:48.381 00:07:48.381 Number of Queues 00:07:48.381 ================ 00:07:48.381 Number of I/O Submission Queues: 64 00:07:48.381 Number of I/O Completion Queues: 64 00:07:48.381 00:07:48.381 ZNS Specific Controller Data 00:07:48.381 ============================ 00:07:48.381 Zone Append Size Limit: 0 00:07:48.381 00:07:48.381 00:07:48.382 Active Namespaces 00:07:48.382 ================= 00:07:48.382 Namespace ID:1 00:07:48.382 Error Recovery Timeout: Unlimited 00:07:48.382 Command Set Identifier: NVM (00h) 00:07:48.382 Deallocate: Supported 00:07:48.382 Deallocated/Unwritten Error: Supported 00:07:48.382 Deallocated Read Value: All 0x00 00:07:48.382 Deallocate in Write Zeroes: Not Supported 00:07:48.382 Deallocated Guard Field: 0xFFFF 00:07:48.382 Flush: Supported 00:07:48.382 Reservation: Not Supported 00:07:48.382 Namespace Sharing Capabilities: Private 00:07:48.382 Size (in LBAs): 1310720 (5GiB) 00:07:48.382 Capacity (in LBAs): 1310720 (5GiB) 00:07:48.382 Utilization (in LBAs): 1310720 (5GiB) 00:07:48.382 Thin Provisioning: Not Supported 00:07:48.382 Per-NS Atomic Units: No 00:07:48.382 Maximum Single Source Range Length: 128 00:07:48.382 Maximum Copy Length: 128 00:07:48.382 Maximum Source Range Count: 128 00:07:48.382 NGUID/EUI64 Never Reused: No 00:07:48.382 Namespace Write Protected: No 00:07:48.382 Number of LBA Formats: 8 00:07:48.382 Current LBA Format: LBA Format #04 00:07:48.382 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.382 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.382 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.382 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:07:48.382 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.382 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.382 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.382 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.382 00:07:48.382 NVM Specific Namespace Data 00:07:48.382 =========================== 00:07:48.382 Logical Block Storage Tag Mask: 0 00:07:48.382 Protection Information Capabilities: 00:07:48.382 16b Guard Protection Information Storage Tag Support: No 00:07:48.382 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.382 Storage Tag Check Read Support: No 00:07:48.382 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.382 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.382 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.382 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.382 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.382 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.382 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.382 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.382 ===================================================== 00:07:48.382 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.382 ===================================================== 00:07:48.382 Controller Capabilities/Features 00:07:48.382 ================================ 00:07:48.382 Vendor ID: 1b36 00:07:48.382 Subsystem Vendor ID: 1af4 00:07:48.382 Serial Number: 12343 00:07:48.382 Model Number: QEMU NVMe Ctrl 00:07:48.382 Firmware Version: 8.0.0 00:07:48.382 Recommended Arb Burst: 6 00:07:48.382 IEEE OUI Identifier: 00 54 52 00:07:48.382 Multi-path I/O 00:07:48.382 May have multiple subsystem ports: No 00:07:48.382 May have multiple controllers: Yes 00:07:48.382 Associated with SR-IOV VF: No 00:07:48.382 Max Data Transfer Size: 524288 00:07:48.382 Max Number of Namespaces: 256 00:07:48.382 Max Number of I/O Queues: 64 00:07:48.382 NVMe Specification Version (VS): 1.4 00:07:48.382 NVMe Specification Version (Identify): 1.4 00:07:48.382 Maximum Queue Entries: 2048 00:07:48.382 Contiguous Queues Required: Yes 00:07:48.382 Arbitration Mechanisms Supported 00:07:48.382 Weighted Round Robin: Not Supported 00:07:48.382 Vendor Specific: Not Supported 00:07:48.382 Reset Timeout: 7500 ms 00:07:48.382 Doorbell Stride: 4 bytes 00:07:48.382 NVM Subsystem Reset: Not Supported 00:07:48.382 Command Sets Supported 00:07:48.382 NVM Command Set: Supported 00:07:48.382 Boot Partition: Not Supported 00:07:48.382 Memory Page Size Minimum: 4096 bytes 00:07:48.382 Memory Page Size Maximum: 65536 bytes 00:07:48.382 Persistent Memory Region: Not Supported 00:07:48.382 Optional Asynchronous Events Supported 00:07:48.382 Namespace Attribute Notices: Supported 00:07:48.382 Firmware Activation Notices: Not Supported 00:07:48.382 ANA Change Notices: Not Supported 00:07:48.382 PLE Aggregate Log Change Notices: Not Supported 00:07:48.382 LBA Status Info Alert Notices: Not Supported 00:07:48.382 EGE Aggregate Log Change Notices: Not Supported 00:07:48.382 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.382 Zone 
Descriptor Change Notices: Not Supported 00:07:48.382 Discovery Log Change Notices: Not Supported 00:07:48.382 Controller Attributes 00:07:48.382 128-bit Host Identifier: Not Supported 00:07:48.382 Non-Operational Permissive Mode: Not Supported 00:07:48.382 NVM Sets: Not Supported 00:07:48.382 Read Recovery Levels: Not Supported 00:07:48.382 Endurance Groups: Supported 00:07:48.382 Predictable Latency Mode: Not Supported 00:07:48.382 Traffic Based Keep ALive: Not Supported 00:07:48.382 Namespace Granularity: Not Supported 00:07:48.382 SQ Associations: Not Supported 00:07:48.382 UUID List: Not Supported 00:07:48.382 Multi-Domain Subsystem: Not Supported 00:07:48.382 Fixed Capacity Management: Not Supported 00:07:48.382 Variable Capacity Management: Not Supported 00:07:48.382 Delete Endurance Group: Not Supported 00:07:48.382 Delete NVM Set: Not Supported 00:07:48.382 Extended LBA Formats Supported: Supported 00:07:48.382 Flexible Data Placement Supported: Supported 00:07:48.382 00:07:48.382 Controller Memory Buffer Support 00:07:48.382 ================================ 00:07:48.382 Supported: No 00:07:48.382 00:07:48.382 Persistent Memory Region Support 00:07:48.382 ================================ 00:07:48.382 Supported: No 00:07:48.382 00:07:48.382 Admin Command Set Attributes 00:07:48.382 ============================ 00:07:48.382 Security Send/Receive: Not Supported 00:07:48.382 Format NVM: Supported 00:07:48.382 Firmware Activate/Download: Not Supported 00:07:48.382 Namespace Management: Supported 00:07:48.382 Device Self-Test: Not Supported 00:07:48.382 Directives: Supported 00:07:48.382 NVMe-MI: Not Supported 00:07:48.382 Virtualization Management: Not Supported 00:07:48.382 Doorbell Buffer Config: Supported 00:07:48.382 Get LBA Status Capability: Not Supported 00:07:48.382 Command & Feature Lockdown Capability: Not Supported 00:07:48.382 Abort Command Limit: 4 00:07:48.382 Async Event Request Limit: 4 00:07:48.382 Number of Firmware Slots: N/A 00:07:48.382 Firmware Slot 1 Read-Only: N/A 00:07:48.382 Firmware Activation Without Reset: N/A 00:07:48.382 Multiple Update Detection Support: N/A 00:07:48.382 Firmware Update Granularity: No Information Provided 00:07:48.382 Per-Namespace SMART Log: Yes 00:07:48.382 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.382 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:48.382 Command Effects Log Page: Supported 00:07:48.382 Get Log Page Extended Data: Supported 00:07:48.382 Telemetry Log Pages: Not Supported 00:07:48.382 Persistent Event Log Pages: Not Supported 00:07:48.382 Supported Log Pages Log Page: May Support 00:07:48.382 Commands Supported & Effects Log Page: Not Supported 00:07:48.382 Feature Identifiers & Effects Log Page:May Support 00:07:48.382 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.382 Data Area 4 for Telemetry Log: Not Supported 00:07:48.382 Error Log Page Entries Supported: 1 00:07:48.382 Keep Alive: Not Supported 00:07:48.382 00:07:48.382 NVM Command Set Attributes 00:07:48.382 ========================== 00:07:48.382 Submission Queue Entry Size 00:07:48.382 Max: 64 00:07:48.382 Min: 64 00:07:48.382 Completion Queue Entry Size 00:07:48.382 Max: 16 00:07:48.382 Min: 16 00:07:48.382 Number of Namespaces: 256 00:07:48.382 Compare Command: Supported 00:07:48.382 Write Uncorrectable Command: Not Supported 00:07:48.382 Dataset Management Command: Supported 00:07:48.382 Write Zeroes Command: Supported 00:07:48.382 Set Features Save Field: Supported 00:07:48.382 Reservations: Not Supported 00:07:48.382 
Timestamp: Supported 00:07:48.382 Copy: Supported 00:07:48.382 Volatile Write Cache: Present 00:07:48.382 Atomic Write Unit (Normal): 1 00:07:48.382 Atomic Write Unit (PFail): 1 00:07:48.382 Atomic Compare & Write Unit: 1 00:07:48.382 Fused Compare & Write: Not Supported 00:07:48.382 Scatter-Gather List 00:07:48.382 SGL Command Set: Supported 00:07:48.382 SGL Keyed: Not Supported 00:07:48.382 SGL Bit Bucket Descriptor: Not Supported 00:07:48.382 SGL Metadata Pointer: Not Supported 00:07:48.382 Oversized SGL: Not Supported 00:07:48.382 SGL Metadata Address: Not Supported 00:07:48.382 SGL Offset: Not Supported 00:07:48.382 Transport SGL Data Block: Not Supported 00:07:48.382 Replay Protected Memory Block: Not Supported 00:07:48.382 00:07:48.382 Firmware Slot Information 00:07:48.382 ========================= 00:07:48.382 Active slot: 1 00:07:48.382 Slot 1 Firmware Revision: 1.0 00:07:48.382 00:07:48.382 00:07:48.382 Commands Supported and Effects 00:07:48.382 ============================== 00:07:48.382 Admin Commands 00:07:48.383 -------------- 00:07:48.383 Delete I/O Submission Queue (00h): Supported 00:07:48.383 Create I/O Submission Queue (01h): Supported 00:07:48.383 Get Log Page (02h): Supported 00:07:48.383 Delete I/O Completion Queue (04h): Supported 00:07:48.383 Create I/O Completion Queue (05h): Supported 00:07:48.383 Identify (06h): Supported 00:07:48.383 Abort (08h): Supported 00:07:48.383 Set Features (09h): Supported 00:07:48.383 Get Features (0Ah): Supported 00:07:48.383 Asynchronous Event Request (0Ch): Supported 00:07:48.383 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.383 Directive Send (19h): Supported 00:07:48.383 Directive Receive (1Ah): Supported 00:07:48.383 Virtualization Management (1Ch): Supported 00:07:48.383 Doorbell Buffer Config (7Ch): Supported 00:07:48.383 Format NVM (80h): Supported LBA-Change 00:07:48.383 I/O Commands 00:07:48.383 ------------ 00:07:48.383 Flush (00h): Supported LBA-Change 00:07:48.383 Write (01h): Supported LBA-Change 00:07:48.383 Read (02h): Supported 00:07:48.383 Compare (05h): Supported 00:07:48.383 Write Zeroes (08h): Supported LBA-Change 00:07:48.383 Dataset Management (09h): Supported LBA-Change 00:07:48.383 Unknown (0Ch): Supported 00:07:48.383 Unknown (12h): Supported 00:07:48.383 Copy (19h): Supported LBA-Change 00:07:48.383 Unknown (1Dh): Supported LBA-Change 00:07:48.383 00:07:48.383 Error Log 00:07:48.383 ========= 00:07:48.383 00:07:48.383 Arbitration 00:07:48.383 =========== 00:07:48.383 Arbitration Burst: no limit 00:07:48.383 00:07:48.383 Power Management 00:07:48.383 ================ 00:07:48.383 Number of Power States: 1 00:07:48.383 Current Power State: Power State #0 00:07:48.383 Power State #0: 00:07:48.383 Max Power: 25.00 W 00:07:48.383 Non-Operational State: Operational 00:07:48.383 Entry Latency: 16 microseconds 00:07:48.383 Exit Latency: 4 microseconds 00:07:48.383 Relative Read Throughput: 0 00:07:48.383 Relative Read Latency: 0 00:07:48.383 Relative Write Throughput: 0 00:07:48.383 Relative Write Latency: 0 00:07:48.383 Idle Power: Not Reported 00:07:48.383 Active Power: Not Reported 00:07:48.383 Non-Operational Permissive Mode: Not Supported 00:07:48.383 00:07:48.383 Health Information 00:07:48.383 ================== 00:07:48.383 Critical Warnings: 00:07:48.383 Available Spare Space: OK 00:07:48.383 Temperature: OK 00:07:48.383 Device Reliability: OK 00:07:48.383 Read Only: No 00:07:48.383 Volatile Memory Backup: OK 00:07:48.383 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.383 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.383 Available Spare: 0% 00:07:48.383 Available Spare Threshold: 0% 00:07:48.383 Life Percentage Used: 0% 00:07:48.383 Data Units Read: 925 00:07:48.383 Data Units Written: 854 00:07:48.383 Host Read Commands: 41862 00:07:48.383 Host Write Commands: 41285 00:07:48.383 Controller Busy Time: 0 minutes 00:07:48.383 Power Cycles: 0 00:07:48.383 Power On Hours: 0 hours 00:07:48.383 Unsafe Shutdowns: 0 00:07:48.383 Unrecoverable Media Errors: 0 00:07:48.383 Lifetime Error Log Entries: 0 00:07:48.383 Warning Temperature Time: 0 minutes 00:07:48.383 Critical Temperature Time: 0 minutes 00:07:48.383 00:07:48.383 Number of Queues 00:07:48.383 ================ 00:07:48.383 Number of I/O Submission Queues: 64 00:07:48.383 Number of I/O Completion Queues: 64 00:07:48.383 00:07:48.383 ZNS Specific Controller Data 00:07:48.383 ============================ 00:07:48.383 Zone Append Size Limit: 0 00:07:48.383 00:07:48.383 00:07:48.383 Active Namespaces 00:07:48.383 ================= 00:07:48.383 Namespace ID:1 00:07:48.383 Error Recovery Timeout: Unlimited 00:07:48.383 Command Set Identifier: NVM (00h) 00:07:48.383 Deallocate: Supported 00:07:48.383 Deallocated/Unwritten Error: Supported 00:07:48.383 Deallocated Read Value: All 0x00 00:07:48.383 Deallocate in Write Zeroes: Not Supported 00:07:48.383 Deallocated Guard Field: 0xFFFF 00:07:48.383 Flush: Supported 00:07:48.383 Reservation: Not Supported 00:07:48.383 Namespace Sharing Capabilities: Multiple Controllers 00:07:48.383 Size (in LBAs): 262144 (1GiB) 00:07:48.383 Capacity (in LBAs): 262144 (1GiB) 00:07:48.383 Utilization (in LBAs): 262144 (1GiB) 00:07:48.383 Thin Provisioning: Not Supported 00:07:48.383 Per-NS Atomic Units: No 00:07:48.383 Maximum Single Source Range Length: 128 00:07:48.383 Maximum Copy Length: 128 00:07:48.383 Maximum Source Range Count: 128 00:07:48.383 NGUID/EUI64 Never Reused: No 00:07:48.383 Namespace Write Protected: No 00:07:48.383 Endurance group ID: 1 00:07:48.383 Number of LBA Formats: 8 00:07:48.383 Current LBA Format: LBA Format #04 00:07:48.383 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.383 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.383 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.383 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.383 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.383 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.383 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.383 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.383 00:07:48.383 Get Feature FDP: 00:07:48.383 ================ 00:07:48.383 Enabled: Yes 00:07:48.383 FDP configuration index: 0 00:07:48.383 00:07:48.383 FDP configurations log page 00:07:48.383 =========================== 00:07:48.383 Number of FDP configurations: 1 00:07:48.383 Version: 0 00:07:48.383 Size: 112 00:07:48.383 FDP Configuration Descriptor: 0 00:07:48.383 Descriptor Size: 96 00:07:48.383 Reclaim Group Identifier format: 2 00:07:48.383 FDP Volatile Write Cache: Not Present 00:07:48.383 FDP Configuration: Valid 00:07:48.383 Vendor Specific Size: 0 00:07:48.383 Number of Reclaim Groups: 2 00:07:48.383 Number of Reclaim Unit Handles: 8 00:07:48.383 Max Placement Identifiers: 128 00:07:48.383 Number of Namespaces Supported: 256 00:07:48.383 Reclaim unit Nominal Size: 6000000 bytes 00:07:48.383 Estimated Reclaim Unit Time Limit: Not Reported 00:07:48.383 RUH Desc #000: RUH Type: Initially Isolated 00:07:48.383 RUH Desc #001: RUH
Type: Initially Isolated 00:07:48.383 RUH Desc #002: RUH Type: Initially Isolated 00:07:48.383 RUH Desc #003: RUH Type: Initially Isolated 00:07:48.383 RUH Desc #004: RUH Type: Initially Isolated 00:07:48.383 RUH Desc #005: RUH Type: Initially Isolated 00:07:48.383 RUH Desc #006: RUH Type: Initially Isolated 00:07:48.383 RUH Desc #007: RUH Type: Initially Isolated 00:07:48.383 00:07:48.383 FDP reclaim unit handle usage log page 00:07:48.383 ====================================== 00:07:48.383 Number of Reclaim Unit Handles: 8 00:07:48.383 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:48.383 RUH Usage Desc #001: RUH Attributes: Unused 00:07:48.383 RUH Usage Desc #002: RUH Attributes: Unused 00:07:48.383 RUH Usage Desc #003: RUH Attributes: Unused 00:07:48.383 RUH Usage Desc #004: RUH Attributes: Unused 00:07:48.383 RUH Usage Desc #005: RUH Attributes: Unused 00:07:48.383 RUH Usage Desc #006: RUH Attributes: Unused 00:07:48.383 RUH Usage Desc #007: RUH Attributes: Unused 00:07:48.383 00:07:48.383 FDP statistics log page 00:07:48.383 ======================= 00:07:48.383 Host bytes with metadata written: 494051328 00:07:48.383 [2024-12-16 21:13:37.950157] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 76184 terminated unexpected 00:07:48.383 Media bytes with metadata written: 494104576 00:07:48.383 Media bytes erased: 0 00:07:48.383 00:07:48.383 FDP events log page 00:07:48.383 =================== 00:07:48.383 Number of FDP events: 0 00:07:48.383 00:07:48.383 NVM Specific Namespace Data 00:07:48.383 =========================== 00:07:48.383 Logical Block Storage Tag Mask: 0 00:07:48.383 Protection Information Capabilities: 00:07:48.383 16b Guard Protection Information Storage Tag Support: No 00:07:48.383 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.383 Storage Tag Check Read Support: No 00:07:48.383 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.383 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.383 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.383 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.383 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.383 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.383 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.383 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.383 ===================================================== 00:07:48.383 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.383 ===================================================== 00:07:48.383 Controller Capabilities/Features 00:07:48.383 ================================ 00:07:48.383 Vendor ID: 1b36 00:07:48.383 Subsystem Vendor ID: 1af4 00:07:48.383 Serial Number: 12342 00:07:48.383 Model Number: QEMU NVMe Ctrl 00:07:48.383 Firmware Version: 8.0.0 00:07:48.383 Recommended Arb Burst: 6 00:07:48.383 IEEE OUI Identifier: 00 54 52 00:07:48.383 Multi-path I/O 00:07:48.383 May have multiple subsystem ports: No 00:07:48.384 May have multiple controllers: No 00:07:48.384 Associated with SR-IOV VF: No 00:07:48.384 Max Data Transfer Size: 524288 00:07:48.384 Max Number of Namespaces: 256
00:07:48.384 Max Number of I/O Queues: 64 00:07:48.384 NVMe Specification Version (VS): 1.4 00:07:48.384 NVMe Specification Version (Identify): 1.4 00:07:48.384 Maximum Queue Entries: 2048 00:07:48.384 Contiguous Queues Required: Yes 00:07:48.384 Arbitration Mechanisms Supported 00:07:48.384 Weighted Round Robin: Not Supported 00:07:48.384 Vendor Specific: Not Supported 00:07:48.384 Reset Timeout: 7500 ms 00:07:48.384 Doorbell Stride: 4 bytes 00:07:48.384 NVM Subsystem Reset: Not Supported 00:07:48.384 Command Sets Supported 00:07:48.384 NVM Command Set: Supported 00:07:48.384 Boot Partition: Not Supported 00:07:48.384 Memory Page Size Minimum: 4096 bytes 00:07:48.384 Memory Page Size Maximum: 65536 bytes 00:07:48.384 Persistent Memory Region: Not Supported 00:07:48.384 Optional Asynchronous Events Supported 00:07:48.384 Namespace Attribute Notices: Supported 00:07:48.384 Firmware Activation Notices: Not Supported 00:07:48.384 ANA Change Notices: Not Supported 00:07:48.384 PLE Aggregate Log Change Notices: Not Supported 00:07:48.384 LBA Status Info Alert Notices: Not Supported 00:07:48.384 EGE Aggregate Log Change Notices: Not Supported 00:07:48.384 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.384 Zone Descriptor Change Notices: Not Supported 00:07:48.384 Discovery Log Change Notices: Not Supported 00:07:48.384 Controller Attributes 00:07:48.384 128-bit Host Identifier: Not Supported 00:07:48.384 Non-Operational Permissive Mode: Not Supported 00:07:48.384 NVM Sets: Not Supported 00:07:48.384 Read Recovery Levels: Not Supported 00:07:48.384 Endurance Groups: Not Supported 00:07:48.384 Predictable Latency Mode: Not Supported 00:07:48.384 Traffic Based Keep Alive: Not Supported 00:07:48.384 Namespace Granularity: Not Supported 00:07:48.384 SQ Associations: Not Supported 00:07:48.384 UUID List: Not Supported 00:07:48.384 Multi-Domain Subsystem: Not Supported 00:07:48.384 Fixed Capacity Management: Not Supported 00:07:48.384 Variable Capacity Management: Not Supported 00:07:48.384 Delete Endurance Group: Not Supported 00:07:48.384 Delete NVM Set: Not Supported 00:07:48.384 Extended LBA Formats Supported: Supported 00:07:48.384 Flexible Data Placement Supported: Not Supported 00:07:48.384 00:07:48.384 Controller Memory Buffer Support 00:07:48.384 ================================ 00:07:48.384 Supported: No 00:07:48.384 00:07:48.384 Persistent Memory Region Support 00:07:48.384 ================================ 00:07:48.384 Supported: No 00:07:48.384 00:07:48.384 Admin Command Set Attributes 00:07:48.384 ============================ 00:07:48.384 Security Send/Receive: Not Supported 00:07:48.384 Format NVM: Supported 00:07:48.384 Firmware Activate/Download: Not Supported 00:07:48.384 Namespace Management: Supported 00:07:48.384 Device Self-Test: Not Supported 00:07:48.384 Directives: Supported 00:07:48.384 NVMe-MI: Not Supported 00:07:48.384 Virtualization Management: Not Supported 00:07:48.384 Doorbell Buffer Config: Supported 00:07:48.384 Get LBA Status Capability: Not Supported 00:07:48.384 Command & Feature Lockdown Capability: Not Supported 00:07:48.384 Abort Command Limit: 4 00:07:48.384 Async Event Request Limit: 4 00:07:48.384 Number of Firmware Slots: N/A 00:07:48.384 Firmware Slot 1 Read-Only: N/A 00:07:48.384 Firmware Activation Without Reset: N/A 00:07:48.384 Multiple Update Detection Support: N/A 00:07:48.384 Firmware Update Granularity: No Information Provided 00:07:48.384 Per-Namespace SMART Log: Yes 00:07:48.384 Asymmetric Namespace Access Log Page: Not Supported
00:07:48.384 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:48.384 Command Effects Log Page: Supported 00:07:48.384 Get Log Page Extended Data: Supported 00:07:48.384 Telemetry Log Pages: Not Supported 00:07:48.384 Persistent Event Log Pages: Not Supported 00:07:48.384 Supported Log Pages Log Page: May Support 00:07:48.384 Commands Supported & Effects Log Page: Not Supported 00:07:48.384 Feature Identifiers & Effects Log Page: May Support 00:07:48.384 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.384 Data Area 4 for Telemetry Log: Not Supported 00:07:48.384 Error Log Page Entries Supported: 1 00:07:48.384 Keep Alive: Not Supported 00:07:48.384 00:07:48.384 NVM Command Set Attributes 00:07:48.384 ========================== 00:07:48.384 Submission Queue Entry Size 00:07:48.384 Max: 64 00:07:48.384 Min: 64 00:07:48.384 Completion Queue Entry Size 00:07:48.384 Max: 16 00:07:48.384 Min: 16 00:07:48.384 Number of Namespaces: 256 00:07:48.384 Compare Command: Supported 00:07:48.384 Write Uncorrectable Command: Not Supported 00:07:48.384 Dataset Management Command: Supported 00:07:48.384 Write Zeroes Command: Supported 00:07:48.384 Set Features Save Field: Supported 00:07:48.384 Reservations: Not Supported 00:07:48.384 Timestamp: Supported 00:07:48.384 Copy: Supported 00:07:48.384 Volatile Write Cache: Present 00:07:48.384 Atomic Write Unit (Normal): 1 00:07:48.384 Atomic Write Unit (PFail): 1 00:07:48.384 Atomic Compare & Write Unit: 1 00:07:48.384 Fused Compare & Write: Not Supported 00:07:48.384 Scatter-Gather List 00:07:48.384 SGL Command Set: Supported 00:07:48.384 SGL Keyed: Not Supported 00:07:48.384 SGL Bit Bucket Descriptor: Not Supported 00:07:48.384 SGL Metadata Pointer: Not Supported 00:07:48.384 Oversized SGL: Not Supported 00:07:48.384 SGL Metadata Address: Not Supported 00:07:48.384 SGL Offset: Not Supported 00:07:48.384 Transport SGL Data Block: Not Supported 00:07:48.384 Replay Protected Memory Block: Not Supported 00:07:48.384 00:07:48.384 Firmware Slot Information 00:07:48.384 ========================= 00:07:48.384 Active slot: 1 00:07:48.384 Slot 1 Firmware Revision: 1.0 00:07:48.384 00:07:48.384 00:07:48.384 Commands Supported and Effects 00:07:48.384 ============================== 00:07:48.384 Admin Commands 00:07:48.384 -------------- 00:07:48.384 Delete I/O Submission Queue (00h): Supported 00:07:48.384 Create I/O Submission Queue (01h): Supported 00:07:48.384 Get Log Page (02h): Supported 00:07:48.384 Delete I/O Completion Queue (04h): Supported 00:07:48.384 Create I/O Completion Queue (05h): Supported 00:07:48.384 Identify (06h): Supported 00:07:48.384 Abort (08h): Supported 00:07:48.384 Set Features (09h): Supported 00:07:48.384 Get Features (0Ah): Supported 00:07:48.384 Asynchronous Event Request (0Ch): Supported 00:07:48.384 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.384 Directive Send (19h): Supported 00:07:48.384 Directive Receive (1Ah): Supported 00:07:48.384 Virtualization Management (1Ch): Supported 00:07:48.384 Doorbell Buffer Config (7Ch): Supported 00:07:48.384 Format NVM (80h): Supported LBA-Change 00:07:48.384 I/O Commands 00:07:48.384 ------------ 00:07:48.384 Flush (00h): Supported LBA-Change 00:07:48.384 Write (01h): Supported LBA-Change 00:07:48.384 Read (02h): Supported 00:07:48.384 Compare (05h): Supported 00:07:48.384 Write Zeroes (08h): Supported LBA-Change 00:07:48.384 Dataset Management (09h): Supported LBA-Change 00:07:48.384 Unknown (0Ch): Supported 00:07:48.384 Unknown (12h): Supported 00:07:48.384 Copy (19h):
Supported LBA-Change 00:07:48.384 Unknown (1Dh): Supported LBA-Change 00:07:48.384 00:07:48.384 Error Log 00:07:48.384 ========= 00:07:48.384 00:07:48.384 Arbitration 00:07:48.384 =========== 00:07:48.384 Arbitration Burst: no limit 00:07:48.384 00:07:48.384 Power Management 00:07:48.384 ================ 00:07:48.384 Number of Power States: 1 00:07:48.384 Current Power State: Power State #0 00:07:48.384 Power State #0: 00:07:48.385 Max Power: 25.00 W 00:07:48.385 Non-Operational State: Operational 00:07:48.385 Entry Latency: 16 microseconds 00:07:48.385 Exit Latency: 4 microseconds 00:07:48.385 Relative Read Throughput: 0 00:07:48.385 Relative Read Latency: 0 00:07:48.385 Relative Write Throughput: 0 00:07:48.385 Relative Write Latency: 0 00:07:48.385 Idle Power: Not Reported 00:07:48.385 Active Power: Not Reported 00:07:48.385 Non-Operational Permissive Mode: Not Supported 00:07:48.385 00:07:48.385 Health Information 00:07:48.385 ================== 00:07:48.385 Critical Warnings: 00:07:48.385 Available Spare Space: OK 00:07:48.385 Temperature: OK 00:07:48.385 Device Reliability: OK 00:07:48.385 Read Only: No 00:07:48.385 Volatile Memory Backup: OK 00:07:48.385 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.385 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.385 Available Spare: 0% 00:07:48.385 Available Spare Threshold: 0% 00:07:48.385 Life Percentage Used: 0% 00:07:48.385 Data Units Read: 2328 00:07:48.385 Data Units Written: 2115 00:07:48.385 Host Read Commands: 121566 00:07:48.385 Host Write Commands: 119835 00:07:48.385 Controller Busy Time: 0 minutes 00:07:48.385 Power Cycles: 0 00:07:48.385 Power On Hours: 0 hours 00:07:48.385 Unsafe Shutdowns: 0 00:07:48.385 Unrecoverable Media Errors: 0 00:07:48.385 Lifetime Error Log Entries: 0 00:07:48.385 Warning Temperature Time: 0 minutes 00:07:48.385 Critical Temperature Time: 0 minutes 00:07:48.385 00:07:48.385 Number of Queues 00:07:48.385 ================ 00:07:48.385 Number of I/O Submission Queues: 64 00:07:48.385 Number of I/O Completion Queues: 64 00:07:48.385 00:07:48.385 ZNS Specific Controller Data 00:07:48.385 ============================ 00:07:48.385 Zone Append Size Limit: 0 00:07:48.385 00:07:48.385 00:07:48.385 Active Namespaces 00:07:48.385 ================= 00:07:48.385 Namespace ID:1 00:07:48.385 Error Recovery Timeout: Unlimited 00:07:48.385 Command Set Identifier: NVM (00h) 00:07:48.385 Deallocate: Supported 00:07:48.385 Deallocated/Unwritten Error: Supported 00:07:48.385 Deallocated Read Value: All 0x00 00:07:48.385 Deallocate in Write Zeroes: Not Supported 00:07:48.385 Deallocated Guard Field: 0xFFFF 00:07:48.385 Flush: Supported 00:07:48.385 Reservation: Not Supported 00:07:48.385 Namespace Sharing Capabilities: Private 00:07:48.385 Size (in LBAs): 1048576 (4GiB) 00:07:48.385 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.385 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.385 Thin Provisioning: Not Supported 00:07:48.385 Per-NS Atomic Units: No 00:07:48.385 Maximum Single Source Range Length: 128 00:07:48.385 Maximum Copy Length: 128 00:07:48.385 Maximum Source Range Count: 128 00:07:48.385 NGUID/EUI64 Never Reused: No 00:07:48.385 Namespace Write Protected: No 00:07:48.385 Number of LBA Formats: 8 00:07:48.385 Current LBA Format: LBA Format #04 00:07:48.385 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.385 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.385 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.385 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.385 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.385 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.385 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.385 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.385 00:07:48.385 NVM Specific Namespace Data 00:07:48.385 =========================== 00:07:48.385 Logical Block Storage Tag Mask: 0 00:07:48.385 Protection Information Capabilities: 00:07:48.385 16b Guard Protection Information Storage Tag Support: No 00:07:48.385 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.385 Storage Tag Check Read Support: No 00:07:48.385 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Namespace ID:2 00:07:48.385 Error Recovery Timeout: Unlimited 00:07:48.385 Command Set Identifier: NVM (00h) 00:07:48.385 Deallocate: Supported 00:07:48.385 Deallocated/Unwritten Error: Supported 00:07:48.385 Deallocated Read Value: All 0x00 00:07:48.385 Deallocate in Write Zeroes: Not Supported 00:07:48.385 Deallocated Guard Field: 0xFFFF 00:07:48.385 Flush: Supported 00:07:48.385 Reservation: Not Supported 00:07:48.385 Namespace Sharing Capabilities: Private 00:07:48.385 Size (in LBAs): 1048576 (4GiB) 00:07:48.385 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.385 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.385 Thin Provisioning: Not Supported 00:07:48.385 Per-NS Atomic Units: No 00:07:48.385 Maximum Single Source Range Length: 128 00:07:48.385 Maximum Copy Length: 128 00:07:48.385 Maximum Source Range Count: 128 00:07:48.385 NGUID/EUI64 Never Reused: No 00:07:48.385 Namespace Write Protected: No 00:07:48.385 Number of LBA Formats: 8 00:07:48.385 Current LBA Format: LBA Format #04 00:07:48.385 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.385 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.385 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.385 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.385 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.385 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.385 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.385 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.385 00:07:48.385 NVM Specific Namespace Data 00:07:48.385 =========================== 00:07:48.385 Logical Block Storage Tag Mask: 0 00:07:48.385 Protection Information Capabilities: 00:07:48.385 16b Guard Protection Information Storage Tag Support: No 00:07:48.385 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.385 Storage Tag Check Read Support: No 00:07:48.385 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Namespace ID:3 00:07:48.385 Error Recovery Timeout: Unlimited 00:07:48.385 Command Set Identifier: NVM (00h) 00:07:48.385 Deallocate: Supported 00:07:48.385 Deallocated/Unwritten Error: Supported 00:07:48.385 Deallocated Read Value: All 0x00 00:07:48.385 Deallocate in Write Zeroes: Not Supported 00:07:48.385 Deallocated Guard Field: 0xFFFF 00:07:48.385 Flush: Supported 00:07:48.385 Reservation: Not Supported 00:07:48.385 Namespace Sharing Capabilities: Private 00:07:48.385 Size (in LBAs): 1048576 (4GiB) 00:07:48.385 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.385 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.385 Thin Provisioning: Not Supported 00:07:48.385 Per-NS Atomic Units: No 00:07:48.385 Maximum Single Source Range Length: 128 00:07:48.385 Maximum Copy Length: 128 00:07:48.385 Maximum Source Range Count: 128 00:07:48.385 NGUID/EUI64 Never Reused: No 00:07:48.385 Namespace Write Protected: No 00:07:48.385 Number of LBA Formats: 8 00:07:48.385 Current LBA Format: LBA Format #04 00:07:48.385 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.385 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.385 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.385 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.385 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.385 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.385 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.385 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.385 00:07:48.385 NVM Specific Namespace Data 00:07:48.385 =========================== 00:07:48.385 Logical Block Storage Tag Mask: 0 00:07:48.385 Protection Information Capabilities: 00:07:48.385 16b Guard Protection Information Storage Tag Support: No 00:07:48.385 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.385 Storage Tag Check Read Support: No 00:07:48.385 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.385 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.386 21:13:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.386 21:13:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:48.645 ===================================================== 00:07:48.645 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.645 ===================================================== 00:07:48.645 Controller Capabilities/Features 00:07:48.645 ================================ 00:07:48.645 Vendor ID: 1b36 00:07:48.645 Subsystem Vendor ID: 1af4 00:07:48.645 Serial Number: 12340 00:07:48.645 Model Number: QEMU NVMe Ctrl 00:07:48.645 Firmware Version: 8.0.0 00:07:48.645 Recommended Arb Burst: 6 00:07:48.645 IEEE OUI Identifier: 00 54 52 00:07:48.645 Multi-path I/O 00:07:48.645 May have multiple subsystem ports: No 00:07:48.645 May have multiple controllers: No 00:07:48.645 Associated with SR-IOV VF: No 00:07:48.645 Max Data Transfer Size: 524288 00:07:48.645 Max Number of Namespaces: 256 00:07:48.645 Max Number of I/O Queues: 64 00:07:48.645 NVMe Specification Version (VS): 1.4 00:07:48.645 NVMe Specification Version (Identify): 1.4 00:07:48.645 Maximum Queue Entries: 2048 00:07:48.645 Contiguous Queues Required: Yes 00:07:48.645 Arbitration Mechanisms Supported 00:07:48.645 Weighted Round Robin: Not Supported 00:07:48.645 Vendor Specific: Not Supported 00:07:48.645 Reset Timeout: 7500 ms 00:07:48.645 Doorbell Stride: 4 bytes 00:07:48.645 NVM Subsystem Reset: Not Supported 00:07:48.645 Command Sets Supported 00:07:48.645 NVM Command Set: Supported 00:07:48.645 Boot Partition: Not Supported 00:07:48.645 Memory Page Size Minimum: 4096 bytes 00:07:48.645 Memory Page Size Maximum: 65536 bytes 00:07:48.645 Persistent Memory Region: Not Supported 00:07:48.645 Optional Asynchronous Events Supported 00:07:48.645 Namespace Attribute Notices: Supported 00:07:48.645 Firmware Activation Notices: Not Supported 00:07:48.645 ANA Change Notices: Not Supported 00:07:48.645 PLE Aggregate Log Change Notices: Not Supported 00:07:48.645 LBA Status Info Alert Notices: Not Supported 00:07:48.645 EGE Aggregate Log Change Notices: Not Supported 00:07:48.645 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.645 Zone Descriptor Change Notices: Not Supported 00:07:48.645 Discovery Log Change Notices: Not Supported 00:07:48.645 Controller Attributes 00:07:48.645 128-bit Host Identifier: Not Supported 00:07:48.645 Non-Operational Permissive Mode: Not Supported 00:07:48.645 NVM Sets: Not Supported 00:07:48.645 Read Recovery Levels: Not Supported 00:07:48.645 Endurance Groups: Not Supported 00:07:48.645 Predictable Latency Mode: Not Supported 00:07:48.646 Traffic Based Keep Alive: Not Supported 00:07:48.646 Namespace Granularity: Not Supported 00:07:48.646 SQ Associations: Not Supported 00:07:48.646 UUID List: Not Supported 00:07:48.646 Multi-Domain Subsystem: Not Supported 00:07:48.646 Fixed Capacity Management: Not Supported 00:07:48.646 Variable Capacity Management: Not Supported 00:07:48.646 Delete Endurance Group: Not Supported 00:07:48.646 Delete NVM Set: Not Supported 00:07:48.646 Extended LBA Formats Supported: Supported 00:07:48.646 Flexible Data Placement Supported: Not Supported 00:07:48.646 00:07:48.646 Controller Memory Buffer Support 00:07:48.646 ================================ 00:07:48.646 Supported: No 00:07:48.646 00:07:48.646 Persistent Memory Region Support 00:07:48.646 ================================ 00:07:48.646 Supported: No 00:07:48.646 00:07:48.646 Admin Command Set Attributes 00:07:48.646 ============================ 00:07:48.646 Security Send/Receive: Not Supported 00:07:48.646
Format NVM: Supported 00:07:48.646 Firmware Activate/Download: Not Supported 00:07:48.646 Namespace Management: Supported 00:07:48.646 Device Self-Test: Not Supported 00:07:48.646 Directives: Supported 00:07:48.646 NVMe-MI: Not Supported 00:07:48.646 Virtualization Management: Not Supported 00:07:48.646 Doorbell Buffer Config: Supported 00:07:48.646 Get LBA Status Capability: Not Supported 00:07:48.646 Command & Feature Lockdown Capability: Not Supported 00:07:48.646 Abort Command Limit: 4 00:07:48.646 Async Event Request Limit: 4 00:07:48.646 Number of Firmware Slots: N/A 00:07:48.646 Firmware Slot 1 Read-Only: N/A 00:07:48.646 Firmware Activation Without Reset: N/A 00:07:48.646 Multiple Update Detection Support: N/A 00:07:48.646 Firmware Update Granularity: No Information Provided 00:07:48.646 Per-Namespace SMART Log: Yes 00:07:48.646 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.646 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:48.646 Command Effects Log Page: Supported 00:07:48.646 Get Log Page Extended Data: Supported 00:07:48.646 Telemetry Log Pages: Not Supported 00:07:48.646 Persistent Event Log Pages: Not Supported 00:07:48.646 Supported Log Pages Log Page: May Support 00:07:48.646 Commands Supported & Effects Log Page: Not Supported 00:07:48.646 Feature Identifiers & Effects Log Page: May Support 00:07:48.646 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.646 Data Area 4 for Telemetry Log: Not Supported 00:07:48.646 Error Log Page Entries Supported: 1 00:07:48.646 Keep Alive: Not Supported 00:07:48.646 00:07:48.646 NVM Command Set Attributes 00:07:48.646 ========================== 00:07:48.646 Submission Queue Entry Size 00:07:48.646 Max: 64 00:07:48.646 Min: 64 00:07:48.646 Completion Queue Entry Size 00:07:48.646 Max: 16 00:07:48.646 Min: 16 00:07:48.646 Number of Namespaces: 256 00:07:48.646 Compare Command: Supported 00:07:48.646 Write Uncorrectable Command: Not Supported 00:07:48.646 Dataset Management Command: Supported 00:07:48.646 Write Zeroes Command: Supported 00:07:48.646 Set Features Save Field: Supported 00:07:48.646 Reservations: Not Supported 00:07:48.646 Timestamp: Supported 00:07:48.646 Copy: Supported 00:07:48.646 Volatile Write Cache: Present 00:07:48.646 Atomic Write Unit (Normal): 1 00:07:48.646 Atomic Write Unit (PFail): 1 00:07:48.646 Atomic Compare & Write Unit: 1 00:07:48.646 Fused Compare & Write: Not Supported 00:07:48.646 Scatter-Gather List 00:07:48.646 SGL Command Set: Supported 00:07:48.646 SGL Keyed: Not Supported 00:07:48.646 SGL Bit Bucket Descriptor: Not Supported 00:07:48.646 SGL Metadata Pointer: Not Supported 00:07:48.646 Oversized SGL: Not Supported 00:07:48.646 SGL Metadata Address: Not Supported 00:07:48.646 SGL Offset: Not Supported 00:07:48.646 Transport SGL Data Block: Not Supported 00:07:48.646 Replay Protected Memory Block: Not Supported 00:07:48.646 00:07:48.646 Firmware Slot Information 00:07:48.646 ========================= 00:07:48.646 Active slot: 1 00:07:48.646 Slot 1 Firmware Revision: 1.0 00:07:48.646 00:07:48.646 00:07:48.646 Commands Supported and Effects 00:07:48.646 ============================== 00:07:48.646 Admin Commands 00:07:48.646 -------------- 00:07:48.646 Delete I/O Submission Queue (00h): Supported 00:07:48.646 Create I/O Submission Queue (01h): Supported 00:07:48.646 Get Log Page (02h): Supported 00:07:48.646 Delete I/O Completion Queue (04h): Supported 00:07:48.646 Create I/O Completion Queue (05h): Supported 00:07:48.646 Identify (06h): Supported 00:07:48.646 Abort (08h): Supported
00:07:48.646 Set Features (09h): Supported 00:07:48.646 Get Features (0Ah): Supported 00:07:48.646 Asynchronous Event Request (0Ch): Supported 00:07:48.646 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.646 Directive Send (19h): Supported 00:07:48.646 Directive Receive (1Ah): Supported 00:07:48.646 Virtualization Management (1Ch): Supported 00:07:48.646 Doorbell Buffer Config (7Ch): Supported 00:07:48.646 Format NVM (80h): Supported LBA-Change 00:07:48.646 I/O Commands 00:07:48.646 ------------ 00:07:48.646 Flush (00h): Supported LBA-Change 00:07:48.646 Write (01h): Supported LBA-Change 00:07:48.646 Read (02h): Supported 00:07:48.646 Compare (05h): Supported 00:07:48.646 Write Zeroes (08h): Supported LBA-Change 00:07:48.646 Dataset Management (09h): Supported LBA-Change 00:07:48.646 Unknown (0Ch): Supported 00:07:48.646 Unknown (12h): Supported 00:07:48.646 Copy (19h): Supported LBA-Change 00:07:48.646 Unknown (1Dh): Supported LBA-Change 00:07:48.646 00:07:48.646 Error Log 00:07:48.646 ========= 00:07:48.646 00:07:48.646 Arbitration 00:07:48.646 =========== 00:07:48.646 Arbitration Burst: no limit 00:07:48.646 00:07:48.646 Power Management 00:07:48.646 ================ 00:07:48.646 Number of Power States: 1 00:07:48.646 Current Power State: Power State #0 00:07:48.646 Power State #0: 00:07:48.646 Max Power: 25.00 W 00:07:48.646 Non-Operational State: Operational 00:07:48.646 Entry Latency: 16 microseconds 00:07:48.646 Exit Latency: 4 microseconds 00:07:48.646 Relative Read Throughput: 0 00:07:48.646 Relative Read Latency: 0 00:07:48.646 Relative Write Throughput: 0 00:07:48.646 Relative Write Latency: 0 00:07:48.646 Idle Power: Not Reported 00:07:48.646 Active Power: Not Reported 00:07:48.646 Non-Operational Permissive Mode: Not Supported 00:07:48.646 00:07:48.646 Health Information 00:07:48.646 ================== 00:07:48.646 Critical Warnings: 00:07:48.646 Available Spare Space: OK 00:07:48.646 Temperature: OK 00:07:48.646 Device Reliability: OK 00:07:48.646 Read Only: No 00:07:48.646 Volatile Memory Backup: OK 00:07:48.646 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.646 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.646 Available Spare: 0% 00:07:48.646 Available Spare Threshold: 0% 00:07:48.646 Life Percentage Used: 0% 00:07:48.646 Data Units Read: 715 00:07:48.646 Data Units Written: 643 00:07:48.646 Host Read Commands: 39688 00:07:48.646 Host Write Commands: 39474 00:07:48.646 Controller Busy Time: 0 minutes 00:07:48.646 Power Cycles: 0 00:07:48.646 Power On Hours: 0 hours 00:07:48.646 Unsafe Shutdowns: 0 00:07:48.646 Unrecoverable Media Errors: 0 00:07:48.646 Lifetime Error Log Entries: 0 00:07:48.646 Warning Temperature Time: 0 minutes 00:07:48.646 Critical Temperature Time: 0 minutes 00:07:48.646 00:07:48.646 Number of Queues 00:07:48.646 ================ 00:07:48.646 Number of I/O Submission Queues: 64 00:07:48.646 Number of I/O Completion Queues: 64 00:07:48.646 00:07:48.646 ZNS Specific Controller Data 00:07:48.646 ============================ 00:07:48.646 Zone Append Size Limit: 0 00:07:48.646 00:07:48.646 00:07:48.646 Active Namespaces 00:07:48.646 ================= 00:07:48.646 Namespace ID:1 00:07:48.646 Error Recovery Timeout: Unlimited 00:07:48.646 Command Set Identifier: NVM (00h) 00:07:48.646 Deallocate: Supported 00:07:48.646 Deallocated/Unwritten Error: Supported 00:07:48.646 Deallocated Read Value: All 0x00 00:07:48.646 Deallocate in Write Zeroes: Not Supported 00:07:48.646 Deallocated Guard Field: 0xFFFF 00:07:48.646 Flush: 
Supported 00:07:48.646 Reservation: Not Supported 00:07:48.646 Metadata Transferred as: Separate Metadata Buffer 00:07:48.646 Namespace Sharing Capabilities: Private 00:07:48.646 Size (in LBAs): 1548666 (5GiB) 00:07:48.646 Capacity (in LBAs): 1548666 (5GiB) 00:07:48.646 Utilization (in LBAs): 1548666 (5GiB) 00:07:48.646 Thin Provisioning: Not Supported 00:07:48.646 Per-NS Atomic Units: No 00:07:48.646 Maximum Single Source Range Length: 128 00:07:48.646 Maximum Copy Length: 128 00:07:48.646 Maximum Source Range Count: 128 00:07:48.646 NGUID/EUI64 Never Reused: No 00:07:48.646 Namespace Write Protected: No 00:07:48.646 Number of LBA Formats: 8 00:07:48.646 Current LBA Format: LBA Format #07 00:07:48.646 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.646 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.646 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.646 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.647 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.647 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.647 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.647 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.647 00:07:48.647 NVM Specific Namespace Data 00:07:48.647 =========================== 00:07:48.647 Logical Block Storage Tag Mask: 0 00:07:48.647 Protection Information Capabilities: 00:07:48.647 16b Guard Protection Information Storage Tag Support: No 00:07:48.647 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.647 Storage Tag Check Read Support: No 00:07:48.647 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.647 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.647 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.647 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.647 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.647 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.647 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.647 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.647 21:13:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.647 21:13:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:48.647 ===================================================== 00:07:48.647 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.647 ===================================================== 00:07:48.647 Controller Capabilities/Features 00:07:48.647 ================================ 00:07:48.647 Vendor ID: 1b36 00:07:48.647 Subsystem Vendor ID: 1af4 00:07:48.647 Serial Number: 12341 00:07:48.647 Model Number: QEMU NVMe Ctrl 00:07:48.647 Firmware Version: 8.0.0 00:07:48.647 Recommended Arb Burst: 6 00:07:48.647 IEEE OUI Identifier: 00 54 52 00:07:48.647 Multi-path I/O 00:07:48.647 May have multiple subsystem ports: No 00:07:48.647 May have multiple controllers: No 00:07:48.647 Associated with SR-IOV VF: No 00:07:48.647 Max Data Transfer Size: 524288 00:07:48.647 Max Number of Namespaces: 256 00:07:48.647 Max Number of I/O Queues: 64 00:07:48.647 NVMe 
Specification Version (VS): 1.4 00:07:48.647 NVMe Specification Version (Identify): 1.4 00:07:48.647 Maximum Queue Entries: 2048 00:07:48.647 Contiguous Queues Required: Yes 00:07:48.647 Arbitration Mechanisms Supported 00:07:48.647 Weighted Round Robin: Not Supported 00:07:48.647 Vendor Specific: Not Supported 00:07:48.647 Reset Timeout: 7500 ms 00:07:48.647 Doorbell Stride: 4 bytes 00:07:48.647 NVM Subsystem Reset: Not Supported 00:07:48.647 Command Sets Supported 00:07:48.647 NVM Command Set: Supported 00:07:48.647 Boot Partition: Not Supported 00:07:48.647 Memory Page Size Minimum: 4096 bytes 00:07:48.647 Memory Page Size Maximum: 65536 bytes 00:07:48.647 Persistent Memory Region: Not Supported 00:07:48.647 Optional Asynchronous Events Supported 00:07:48.647 Namespace Attribute Notices: Supported 00:07:48.647 Firmware Activation Notices: Not Supported 00:07:48.647 ANA Change Notices: Not Supported 00:07:48.647 PLE Aggregate Log Change Notices: Not Supported 00:07:48.647 LBA Status Info Alert Notices: Not Supported 00:07:48.647 EGE Aggregate Log Change Notices: Not Supported 00:07:48.647 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.647 Zone Descriptor Change Notices: Not Supported 00:07:48.647 Discovery Log Change Notices: Not Supported 00:07:48.647 Controller Attributes 00:07:48.647 128-bit Host Identifier: Not Supported 00:07:48.647 Non-Operational Permissive Mode: Not Supported 00:07:48.647 NVM Sets: Not Supported 00:07:48.647 Read Recovery Levels: Not Supported 00:07:48.647 Endurance Groups: Not Supported 00:07:48.647 Predictable Latency Mode: Not Supported 00:07:48.647 Traffic Based Keep Alive: Not Supported 00:07:48.647 Namespace Granularity: Not Supported 00:07:48.647 SQ Associations: Not Supported 00:07:48.647 UUID List: Not Supported 00:07:48.647 Multi-Domain Subsystem: Not Supported 00:07:48.647 Fixed Capacity Management: Not Supported 00:07:48.647 Variable Capacity Management: Not Supported 00:07:48.647 Delete Endurance Group: Not Supported 00:07:48.647 Delete NVM Set: Not Supported 00:07:48.647 Extended LBA Formats Supported: Supported 00:07:48.647 Flexible Data Placement Supported: Not Supported 00:07:48.647 00:07:48.647 Controller Memory Buffer Support 00:07:48.647 ================================ 00:07:48.647 Supported: No 00:07:48.647 00:07:48.647 Persistent Memory Region Support 00:07:48.647 ================================ 00:07:48.647 Supported: No 00:07:48.647 00:07:48.647 Admin Command Set Attributes 00:07:48.647 ============================ 00:07:48.647 Security Send/Receive: Not Supported 00:07:48.647 Format NVM: Supported 00:07:48.647 Firmware Activate/Download: Not Supported 00:07:48.647 Namespace Management: Supported 00:07:48.647 Device Self-Test: Not Supported 00:07:48.647 Directives: Supported 00:07:48.647 NVMe-MI: Not Supported 00:07:48.647 Virtualization Management: Not Supported 00:07:48.647 Doorbell Buffer Config: Supported 00:07:48.647 Get LBA Status Capability: Not Supported 00:07:48.647 Command & Feature Lockdown Capability: Not Supported 00:07:48.647 Abort Command Limit: 4 00:07:48.647 Async Event Request Limit: 4 00:07:48.647 Number of Firmware Slots: N/A 00:07:48.647 Firmware Slot 1 Read-Only: N/A 00:07:48.647 Firmware Activation Without Reset: N/A 00:07:48.647 Multiple Update Detection Support: N/A 00:07:48.647 Firmware Update Granularity: No Information Provided 00:07:48.647 Per-Namespace SMART Log: Yes 00:07:48.647 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.647 Subsystem NQN: nqn.2019-08.org.qemu:12341
00:07:48.647 Command Effects Log Page: Supported 00:07:48.647 Get Log Page Extended Data: Supported 00:07:48.647 Telemetry Log Pages: Not Supported 00:07:48.647 Persistent Event Log Pages: Not Supported 00:07:48.647 Supported Log Pages Log Page: May Support 00:07:48.647 Commands Supported & Effects Log Page: Not Supported 00:07:48.647 Feature Identifiers & Effects Log Page: May Support 00:07:48.647 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.647 Data Area 4 for Telemetry Log: Not Supported 00:07:48.647 Error Log Page Entries Supported: 1 00:07:48.647 Keep Alive: Not Supported 00:07:48.647 00:07:48.647 NVM Command Set Attributes 00:07:48.647 ========================== 00:07:48.647 Submission Queue Entry Size 00:07:48.647 Max: 64 00:07:48.647 Min: 64 00:07:48.647 Completion Queue Entry Size 00:07:48.647 Max: 16 00:07:48.647 Min: 16 00:07:48.647 Number of Namespaces: 256 00:07:48.647 Compare Command: Supported 00:07:48.647 Write Uncorrectable Command: Not Supported 00:07:48.647 Dataset Management Command: Supported 00:07:48.647 Write Zeroes Command: Supported 00:07:48.647 Set Features Save Field: Supported 00:07:48.647 Reservations: Not Supported 00:07:48.647 Timestamp: Supported 00:07:48.647 Copy: Supported 00:07:48.647 Volatile Write Cache: Present 00:07:48.647 Atomic Write Unit (Normal): 1 00:07:48.647 Atomic Write Unit (PFail): 1 00:07:48.647 Atomic Compare & Write Unit: 1 00:07:48.647 Fused Compare & Write: Not Supported 00:07:48.647 Scatter-Gather List 00:07:48.647 SGL Command Set: Supported 00:07:48.647 SGL Keyed: Not Supported 00:07:48.647 SGL Bit Bucket Descriptor: Not Supported 00:07:48.647 SGL Metadata Pointer: Not Supported 00:07:48.647 Oversized SGL: Not Supported 00:07:48.647 SGL Metadata Address: Not Supported 00:07:48.647 SGL Offset: Not Supported 00:07:48.647 Transport SGL Data Block: Not Supported 00:07:48.647 Replay Protected Memory Block: Not Supported 00:07:48.647 00:07:48.647 Firmware Slot Information 00:07:48.647 ========================= 00:07:48.647 Active slot: 1 00:07:48.647 Slot 1 Firmware Revision: 1.0 00:07:48.647 00:07:48.647 00:07:48.647 Commands Supported and Effects 00:07:48.647 ============================== 00:07:48.647 Admin Commands 00:07:48.647 -------------- 00:07:48.647 Delete I/O Submission Queue (00h): Supported 00:07:48.647 Create I/O Submission Queue (01h): Supported 00:07:48.647 Get Log Page (02h): Supported 00:07:48.647 Delete I/O Completion Queue (04h): Supported 00:07:48.647 Create I/O Completion Queue (05h): Supported 00:07:48.647 Identify (06h): Supported 00:07:48.647 Abort (08h): Supported 00:07:48.647 Set Features (09h): Supported 00:07:48.647 Get Features (0Ah): Supported 00:07:48.647 Asynchronous Event Request (0Ch): Supported 00:07:48.647 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.647 Directive Send (19h): Supported 00:07:48.647 Directive Receive (1Ah): Supported 00:07:48.647 Virtualization Management (1Ch): Supported 00:07:48.647 Doorbell Buffer Config (7Ch): Supported 00:07:48.647 Format NVM (80h): Supported LBA-Change 00:07:48.647 I/O Commands 00:07:48.647 ------------ 00:07:48.647 Flush (00h): Supported LBA-Change 00:07:48.647 Write (01h): Supported LBA-Change 00:07:48.647 Read (02h): Supported 00:07:48.647 Compare (05h): Supported 00:07:48.647 Write Zeroes (08h): Supported LBA-Change 00:07:48.647 Dataset Management (09h): Supported LBA-Change 00:07:48.647 Unknown (0Ch): Supported 00:07:48.647 Unknown (12h): Supported 00:07:48.647 Copy (19h): Supported LBA-Change 00:07:48.648 Unknown (1Dh):
Supported LBA-Change 00:07:48.648 00:07:48.648 Error Log 00:07:48.648 ========= 00:07:48.648 00:07:48.648 Arbitration 00:07:48.648 =========== 00:07:48.648 Arbitration Burst: no limit 00:07:48.648 00:07:48.648 Power Management 00:07:48.648 ================ 00:07:48.648 Number of Power States: 1 00:07:48.648 Current Power State: Power State #0 00:07:48.648 Power State #0: 00:07:48.648 Max Power: 25.00 W 00:07:48.648 Non-Operational State: Operational 00:07:48.648 Entry Latency: 16 microseconds 00:07:48.648 Exit Latency: 4 microseconds 00:07:48.648 Relative Read Throughput: 0 00:07:48.648 Relative Read Latency: 0 00:07:48.648 Relative Write Throughput: 0 00:07:48.648 Relative Write Latency: 0 00:07:48.907 Idle Power: Not Reported 00:07:48.907 Active Power: Not Reported 00:07:48.907 Non-Operational Permissive Mode: Not Supported 00:07:48.907 00:07:48.907 Health Information 00:07:48.907 ================== 00:07:48.907 Critical Warnings: 00:07:48.907 Available Spare Space: OK 00:07:48.907 Temperature: OK 00:07:48.907 Device Reliability: OK 00:07:48.907 Read Only: No 00:07:48.907 Volatile Memory Backup: OK 00:07:48.907 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.907 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.907 Available Spare: 0% 00:07:48.907 Available Spare Threshold: 0% 00:07:48.907 Life Percentage Used: 0% 00:07:48.907 Data Units Read: 1082 00:07:48.907 Data Units Written: 948 00:07:48.907 Host Read Commands: 60714 00:07:48.907 Host Write Commands: 59500 00:07:48.907 Controller Busy Time: 0 minutes 00:07:48.907 Power Cycles: 0 00:07:48.907 Power On Hours: 0 hours 00:07:48.907 Unsafe Shutdowns: 0 00:07:48.908 Unrecoverable Media Errors: 0 00:07:48.908 Lifetime Error Log Entries: 0 00:07:48.908 Warning Temperature Time: 0 minutes 00:07:48.908 Critical Temperature Time: 0 minutes 00:07:48.908 00:07:48.908 Number of Queues 00:07:48.908 ================ 00:07:48.908 Number of I/O Submission Queues: 64 00:07:48.908 Number of I/O Completion Queues: 64 00:07:48.908 00:07:48.908 ZNS Specific Controller Data 00:07:48.908 ============================ 00:07:48.908 Zone Append Size Limit: 0 00:07:48.908 00:07:48.908 00:07:48.908 Active Namespaces 00:07:48.908 ================= 00:07:48.908 Namespace ID:1 00:07:48.908 Error Recovery Timeout: Unlimited 00:07:48.908 Command Set Identifier: NVM (00h) 00:07:48.908 Deallocate: Supported 00:07:48.908 Deallocated/Unwritten Error: Supported 00:07:48.908 Deallocated Read Value: All 0x00 00:07:48.908 Deallocate in Write Zeroes: Not Supported 00:07:48.908 Deallocated Guard Field: 0xFFFF 00:07:48.908 Flush: Supported 00:07:48.908 Reservation: Not Supported 00:07:48.908 Namespace Sharing Capabilities: Private 00:07:48.908 Size (in LBAs): 1310720 (5GiB) 00:07:48.908 Capacity (in LBAs): 1310720 (5GiB) 00:07:48.908 Utilization (in LBAs): 1310720 (5GiB) 00:07:48.908 Thin Provisioning: Not Supported 00:07:48.908 Per-NS Atomic Units: No 00:07:48.908 Maximum Single Source Range Length: 128 00:07:48.908 Maximum Copy Length: 128 00:07:48.908 Maximum Source Range Count: 128 00:07:48.908 NGUID/EUI64 Never Reused: No 00:07:48.908 Namespace Write Protected: No 00:07:48.908 Number of LBA Formats: 8 00:07:48.908 Current LBA Format: LBA Format #04 00:07:48.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.908 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:48.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.908 00:07:48.908 NVM Specific Namespace Data 00:07:48.908 =========================== 00:07:48.908 Logical Block Storage Tag Mask: 0 00:07:48.908 Protection Information Capabilities: 00:07:48.908 16b Guard Protection Information Storage Tag Support: No 00:07:48.908 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.908 Storage Tag Check Read Support: No 00:07:48.908 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.908 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.908 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.908 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.908 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.908 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.908 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.908 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.908 21:13:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.908 21:13:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:48.908 ===================================================== 00:07:48.908 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.908 ===================================================== 00:07:48.908 Controller Capabilities/Features 00:07:48.908 ================================ 00:07:48.908 Vendor ID: 1b36 00:07:48.908 Subsystem Vendor ID: 1af4 00:07:48.908 Serial Number: 12342 00:07:48.908 Model Number: QEMU NVMe Ctrl 00:07:48.908 Firmware Version: 8.0.0 00:07:48.908 Recommended Arb Burst: 6 00:07:48.908 IEEE OUI Identifier: 00 54 52 00:07:48.908 Multi-path I/O 00:07:48.908 May have multiple subsystem ports: No 00:07:48.908 May have multiple controllers: No 00:07:48.908 Associated with SR-IOV VF: No 00:07:48.908 Max Data Transfer Size: 524288 00:07:48.908 Max Number of Namespaces: 256 00:07:48.908 Max Number of I/O Queues: 64 00:07:48.908 NVMe Specification Version (VS): 1.4 00:07:48.908 NVMe Specification Version (Identify): 1.4 00:07:48.908 Maximum Queue Entries: 2048 00:07:48.908 Contiguous Queues Required: Yes 00:07:48.908 Arbitration Mechanisms Supported 00:07:48.908 Weighted Round Robin: Not Supported 00:07:48.908 Vendor Specific: Not Supported 00:07:48.908 Reset Timeout: 7500 ms 00:07:48.908 Doorbell Stride: 4 bytes 00:07:48.908 NVM Subsystem Reset: Not Supported 00:07:48.908 Command Sets Supported 00:07:48.908 NVM Command Set: Supported 00:07:48.908 Boot Partition: Not Supported 00:07:48.908 Memory Page Size Minimum: 4096 bytes 00:07:48.908 Memory Page Size Maximum: 65536 bytes 00:07:48.908 Persistent Memory Region: Not Supported 00:07:48.908 Optional Asynchronous Events Supported 00:07:48.908 Namespace Attribute Notices: Supported 00:07:48.908 Firmware Activation Notices: Not Supported 00:07:48.908 ANA Change Notices: Not Supported 00:07:48.908 PLE Aggregate Log Change Notices: Not Supported 00:07:48.908 LBA Status Info Alert Notices: 
Not Supported 00:07:48.908 EGE Aggregate Log Change Notices: Not Supported 00:07:48.908 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.908 Zone Descriptor Change Notices: Not Supported 00:07:48.908 Discovery Log Change Notices: Not Supported 00:07:48.908 Controller Attributes 00:07:48.908 128-bit Host Identifier: Not Supported 00:07:48.908 Non-Operational Permissive Mode: Not Supported 00:07:48.908 NVM Sets: Not Supported 00:07:48.908 Read Recovery Levels: Not Supported 00:07:48.908 Endurance Groups: Not Supported 00:07:48.908 Predictable Latency Mode: Not Supported 00:07:48.908 Traffic Based Keep Alive: Not Supported 00:07:48.908 Namespace Granularity: Not Supported 00:07:48.908 SQ Associations: Not Supported 00:07:48.908 UUID List: Not Supported 00:07:48.908 Multi-Domain Subsystem: Not Supported 00:07:48.908 Fixed Capacity Management: Not Supported 00:07:48.908 Variable Capacity Management: Not Supported 00:07:48.908 Delete Endurance Group: Not Supported 00:07:48.908 Delete NVM Set: Not Supported 00:07:48.908 Extended LBA Formats Supported: Supported 00:07:48.908 Flexible Data Placement Supported: Not Supported 00:07:48.908 00:07:48.908 Controller Memory Buffer Support 00:07:48.908 ================================ 00:07:48.908 Supported: No 00:07:48.908 00:07:48.908 Persistent Memory Region Support 00:07:48.908 ================================ 00:07:48.908 Supported: No 00:07:48.908 00:07:48.908 Admin Command Set Attributes 00:07:48.908 ============================ 00:07:48.908 Security Send/Receive: Not Supported 00:07:48.908 Format NVM: Supported 00:07:48.908 Firmware Activate/Download: Not Supported 00:07:48.908 Namespace Management: Supported 00:07:48.908 Device Self-Test: Not Supported 00:07:48.908 Directives: Supported 00:07:48.908 NVMe-MI: Not Supported 00:07:48.908 Virtualization Management: Not Supported 00:07:48.908 Doorbell Buffer Config: Supported 00:07:48.908 Get LBA Status Capability: Not Supported 00:07:48.908 Command & Feature Lockdown Capability: Not Supported 00:07:48.908 Abort Command Limit: 4 00:07:48.908 Async Event Request Limit: 4 00:07:48.908 Number of Firmware Slots: N/A 00:07:48.908 Firmware Slot 1 Read-Only: N/A 00:07:48.908 Firmware Activation Without Reset: N/A 00:07:48.908 Multiple Update Detection Support: N/A 00:07:48.908 Firmware Update Granularity: No Information Provided 00:07:48.908 Per-Namespace SMART Log: Yes 00:07:48.908 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.908 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:48.908 Command Effects Log Page: Supported 00:07:48.908 Get Log Page Extended Data: Supported 00:07:48.908 Telemetry Log Pages: Not Supported 00:07:48.908 Persistent Event Log Pages: Not Supported 00:07:48.908 Supported Log Pages Log Page: May Support 00:07:48.908 Commands Supported & Effects Log Page: Not Supported 00:07:48.908 Feature Identifiers & Effects Log Page: May Support 00:07:48.908 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.908 Data Area 4 for Telemetry Log: Not Supported 00:07:48.908 Error Log Page Entries Supported: 1 00:07:48.908 Keep Alive: Not Supported 00:07:48.908 00:07:48.908 NVM Command Set Attributes 00:07:48.908 ========================== 00:07:48.908 Submission Queue Entry Size 00:07:48.908 Max: 64 00:07:48.908 Min: 64 00:07:48.908 Completion Queue Entry Size 00:07:48.908 Max: 16 00:07:48.908 Min: 16 00:07:48.908 Number of Namespaces: 256 00:07:48.908 Compare Command: Supported 00:07:48.908 Write Uncorrectable Command: Not Supported 00:07:48.908 Dataset Management Command:
Supported 00:07:48.908 Write Zeroes Command: Supported 00:07:48.908 Set Features Save Field: Supported 00:07:48.908 Reservations: Not Supported 00:07:48.908 Timestamp: Supported 00:07:48.908 Copy: Supported 00:07:48.908 Volatile Write Cache: Present 00:07:48.908 Atomic Write Unit (Normal): 1 00:07:48.908 Atomic Write Unit (PFail): 1 00:07:48.908 Atomic Compare & Write Unit: 1 00:07:48.908 Fused Compare & Write: Not Supported 00:07:48.908 Scatter-Gather List 00:07:48.908 SGL Command Set: Supported 00:07:48.909 SGL Keyed: Not Supported 00:07:48.909 SGL Bit Bucket Descriptor: Not Supported 00:07:48.909 SGL Metadata Pointer: Not Supported 00:07:48.909 Oversized SGL: Not Supported 00:07:48.909 SGL Metadata Address: Not Supported 00:07:48.909 SGL Offset: Not Supported 00:07:48.909 Transport SGL Data Block: Not Supported 00:07:48.909 Replay Protected Memory Block: Not Supported 00:07:48.909 00:07:48.909 Firmware Slot Information 00:07:48.909 ========================= 00:07:48.909 Active slot: 1 00:07:48.909 Slot 1 Firmware Revision: 1.0 00:07:48.909 00:07:48.909 00:07:48.909 Commands Supported and Effects 00:07:48.909 ============================== 00:07:48.909 Admin Commands 00:07:48.909 -------------- 00:07:48.909 Delete I/O Submission Queue (00h): Supported 00:07:48.909 Create I/O Submission Queue (01h): Supported 00:07:48.909 Get Log Page (02h): Supported 00:07:48.909 Delete I/O Completion Queue (04h): Supported 00:07:48.909 Create I/O Completion Queue (05h): Supported 00:07:48.909 Identify (06h): Supported 00:07:48.909 Abort (08h): Supported 00:07:48.909 Set Features (09h): Supported 00:07:48.909 Get Features (0Ah): Supported 00:07:48.909 Asynchronous Event Request (0Ch): Supported 00:07:48.909 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.909 Directive Send (19h): Supported 00:07:48.909 Directive Receive (1Ah): Supported 00:07:48.909 Virtualization Management (1Ch): Supported 00:07:48.909 Doorbell Buffer Config (7Ch): Supported 00:07:48.909 Format NVM (80h): Supported LBA-Change 00:07:48.909 I/O Commands 00:07:48.909 ------------ 00:07:48.909 Flush (00h): Supported LBA-Change 00:07:48.909 Write (01h): Supported LBA-Change 00:07:48.909 Read (02h): Supported 00:07:48.909 Compare (05h): Supported 00:07:48.909 Write Zeroes (08h): Supported LBA-Change 00:07:48.909 Dataset Management (09h): Supported LBA-Change 00:07:48.909 Unknown (0Ch): Supported 00:07:48.909 Unknown (12h): Supported 00:07:48.909 Copy (19h): Supported LBA-Change 00:07:48.909 Unknown (1Dh): Supported LBA-Change 00:07:48.909 00:07:48.909 Error Log 00:07:48.909 ========= 00:07:48.909 00:07:48.909 Arbitration 00:07:48.909 =========== 00:07:48.909 Arbitration Burst: no limit 00:07:48.909 00:07:48.909 Power Management 00:07:48.909 ================ 00:07:48.909 Number of Power States: 1 00:07:48.909 Current Power State: Power State #0 00:07:48.909 Power State #0: 00:07:48.909 Max Power: 25.00 W 00:07:48.909 Non-Operational State: Operational 00:07:48.909 Entry Latency: 16 microseconds 00:07:48.909 Exit Latency: 4 microseconds 00:07:48.909 Relative Read Throughput: 0 00:07:48.909 Relative Read Latency: 0 00:07:48.909 Relative Write Throughput: 0 00:07:48.909 Relative Write Latency: 0 00:07:48.909 Idle Power: Not Reported 00:07:48.909 Active Power: Not Reported 00:07:48.909 Non-Operational Permissive Mode: Not Supported 00:07:48.909 00:07:48.909 Health Information 00:07:48.909 ================== 00:07:48.909 Critical Warnings: 00:07:48.909 Available Spare Space: OK 00:07:48.909 Temperature: OK 00:07:48.909 Device 
Reliability: OK 00:07:48.909 Read Only: No 00:07:48.909 Volatile Memory Backup: OK 00:07:48.909 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.909 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.909 Available Spare: 0% 00:07:48.909 Available Spare Threshold: 0% 00:07:48.909 Life Percentage Used: 0% 00:07:48.909 Data Units Read: 2328 00:07:48.909 Data Units Written: 2115 00:07:48.909 Host Read Commands: 121566 00:07:48.909 Host Write Commands: 119835 00:07:48.909 Controller Busy Time: 0 minutes 00:07:48.909 Power Cycles: 0 00:07:48.909 Power On Hours: 0 hours 00:07:48.909 Unsafe Shutdowns: 0 00:07:48.909 Unrecoverable Media Errors: 0 00:07:48.909 Lifetime Error Log Entries: 0 00:07:48.909 Warning Temperature Time: 0 minutes 00:07:48.909 Critical Temperature Time: 0 minutes 00:07:48.909 00:07:48.909 Number of Queues 00:07:48.909 ================ 00:07:48.909 Number of I/O Submission Queues: 64 00:07:48.909 Number of I/O Completion Queues: 64 00:07:48.909 00:07:48.909 ZNS Specific Controller Data 00:07:48.909 ============================ 00:07:48.909 Zone Append Size Limit: 0 00:07:48.909 00:07:48.909 00:07:48.909 Active Namespaces 00:07:48.909 ================= 00:07:48.909 Namespace ID:1 00:07:48.909 Error Recovery Timeout: Unlimited 00:07:48.909 Command Set Identifier: NVM (00h) 00:07:48.909 Deallocate: Supported 00:07:48.909 Deallocated/Unwritten Error: Supported 00:07:48.909 Deallocated Read Value: All 0x00 00:07:48.909 Deallocate in Write Zeroes: Not Supported 00:07:48.909 Deallocated Guard Field: 0xFFFF 00:07:48.909 Flush: Supported 00:07:48.909 Reservation: Not Supported 00:07:48.909 Namespace Sharing Capabilities: Private 00:07:48.909 Size (in LBAs): 1048576 (4GiB) 00:07:48.909 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.909 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.909 Thin Provisioning: Not Supported 00:07:48.909 Per-NS Atomic Units: No 00:07:48.909 Maximum Single Source Range Length: 128 00:07:48.909 Maximum Copy Length: 128 00:07:48.909 Maximum Source Range Count: 128 00:07:48.909 NGUID/EUI64 Never Reused: No 00:07:48.909 Namespace Write Protected: No 00:07:48.909 Number of LBA Formats: 8 00:07:48.909 Current LBA Format: LBA Format #04 00:07:48.909 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.909 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.909 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.909 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.909 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.909 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.909 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.909 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.909 00:07:48.909 NVM Specific Namespace Data 00:07:48.909 =========================== 00:07:48.909 Logical Block Storage Tag Mask: 0 00:07:48.909 Protection Information Capabilities: 00:07:48.909 16b Guard Protection Information Storage Tag Support: No 00:07:48.909 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.909 Storage Tag Check Read Support: No 00:07:48.909 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.909 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.909 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.909 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.909 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.909 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.909 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.909 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.909 Namespace ID:2 00:07:48.909 Error Recovery Timeout: Unlimited 00:07:48.909 Command Set Identifier: NVM (00h) 00:07:48.909 Deallocate: Supported 00:07:48.909 Deallocated/Unwritten Error: Supported 00:07:48.909 Deallocated Read Value: All 0x00 00:07:48.909 Deallocate in Write Zeroes: Not Supported 00:07:48.909 Deallocated Guard Field: 0xFFFF 00:07:48.909 Flush: Supported 00:07:48.909 Reservation: Not Supported 00:07:48.909 Namespace Sharing Capabilities: Private 00:07:48.909 Size (in LBAs): 1048576 (4GiB) 00:07:48.909 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.909 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.909 Thin Provisioning: Not Supported 00:07:48.909 Per-NS Atomic Units: No 00:07:48.909 Maximum Single Source Range Length: 128 00:07:48.909 Maximum Copy Length: 128 00:07:48.909 Maximum Source Range Count: 128 00:07:48.909 NGUID/EUI64 Never Reused: No 00:07:48.909 Namespace Write Protected: No 00:07:48.909 Number of LBA Formats: 8 00:07:48.909 Current LBA Format: LBA Format #04 00:07:48.909 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.909 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.909 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.909 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.909 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.909 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.909 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.909 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.909 00:07:48.909 NVM Specific Namespace Data 00:07:48.909 =========================== 00:07:48.909 Logical Block Storage Tag Mask: 0 00:07:48.910 Protection Information Capabilities: 00:07:48.910 16b Guard Protection Information Storage Tag Support: No 00:07:48.910 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.910 Storage Tag Check Read Support: No 00:07:48.910 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Namespace ID:3 00:07:48.910 Error Recovery Timeout: Unlimited 00:07:48.910 Command Set Identifier: NVM (00h) 00:07:48.910 Deallocate: Supported 00:07:48.910 Deallocated/Unwritten Error: Supported 00:07:48.910 Deallocated Read Value: All 0x00 00:07:48.910 Deallocate in Write Zeroes: Not Supported 00:07:48.910 Deallocated Guard Field: 0xFFFF 00:07:48.910 Flush: Supported 00:07:48.910 Reservation: Not Supported 00:07:48.910 
Namespace Sharing Capabilities: Private 00:07:48.910 Size (in LBAs): 1048576 (4GiB) 00:07:48.910 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.910 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.910 Thin Provisioning: Not Supported 00:07:48.910 Per-NS Atomic Units: No 00:07:48.910 Maximum Single Source Range Length: 128 00:07:48.910 Maximum Copy Length: 128 00:07:48.910 Maximum Source Range Count: 128 00:07:48.910 NGUID/EUI64 Never Reused: No 00:07:48.910 Namespace Write Protected: No 00:07:48.910 Number of LBA Formats: 8 00:07:48.910 Current LBA Format: LBA Format #04 00:07:48.910 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.910 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.910 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.910 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.910 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.910 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.910 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.910 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.910 00:07:48.910 NVM Specific Namespace Data 00:07:48.910 =========================== 00:07:48.910 Logical Block Storage Tag Mask: 0 00:07:48.910 Protection Information Capabilities: 00:07:48.910 16b Guard Protection Information Storage Tag Support: No 00:07:48.910 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.910 Storage Tag Check Read Support: No 00:07:48.910 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.910 21:13:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.910 21:13:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:49.169 ===================================================== 00:07:49.169 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:49.169 ===================================================== 00:07:49.169 Controller Capabilities/Features 00:07:49.169 ================================ 00:07:49.169 Vendor ID: 1b36 00:07:49.169 Subsystem Vendor ID: 1af4 00:07:49.169 Serial Number: 12343 00:07:49.169 Model Number: QEMU NVMe Ctrl 00:07:49.169 Firmware Version: 8.0.0 00:07:49.169 Recommended Arb Burst: 6 00:07:49.169 IEEE OUI Identifier: 00 54 52 00:07:49.169 Multi-path I/O 00:07:49.169 May have multiple subsystem ports: No 00:07:49.169 May have multiple controllers: Yes 00:07:49.169 Associated with SR-IOV VF: No 00:07:49.169 Max Data Transfer Size: 524288 00:07:49.169 Max Number of Namespaces: 256 00:07:49.169 Max Number of I/O Queues: 64 00:07:49.169 NVMe Specification Version (VS): 1.4 00:07:49.169 NVMe Specification Version (Identify): 1.4 00:07:49.169 Maximum Queue Entries: 2048 
00:07:49.169 Contiguous Queues Required: Yes 00:07:49.169 Arbitration Mechanisms Supported 00:07:49.169 Weighted Round Robin: Not Supported 00:07:49.169 Vendor Specific: Not Supported 00:07:49.169 Reset Timeout: 7500 ms 00:07:49.169 Doorbell Stride: 4 bytes 00:07:49.169 NVM Subsystem Reset: Not Supported 00:07:49.169 Command Sets Supported 00:07:49.169 NVM Command Set: Supported 00:07:49.169 Boot Partition: Not Supported 00:07:49.169 Memory Page Size Minimum: 4096 bytes 00:07:49.169 Memory Page Size Maximum: 65536 bytes 00:07:49.169 Persistent Memory Region: Not Supported 00:07:49.169 Optional Asynchronous Events Supported 00:07:49.169 Namespace Attribute Notices: Supported 00:07:49.169 Firmware Activation Notices: Not Supported 00:07:49.169 ANA Change Notices: Not Supported 00:07:49.169 PLE Aggregate Log Change Notices: Not Supported 00:07:49.169 LBA Status Info Alert Notices: Not Supported 00:07:49.169 EGE Aggregate Log Change Notices: Not Supported 00:07:49.170 Normal NVM Subsystem Shutdown event: Not Supported 00:07:49.170 Zone Descriptor Change Notices: Not Supported 00:07:49.170 Discovery Log Change Notices: Not Supported 00:07:49.170 Controller Attributes 00:07:49.170 128-bit Host Identifier: Not Supported 00:07:49.170 Non-Operational Permissive Mode: Not Supported 00:07:49.170 NVM Sets: Not Supported 00:07:49.170 Read Recovery Levels: Not Supported 00:07:49.170 Endurance Groups: Supported 00:07:49.170 Predictable Latency Mode: Not Supported 00:07:49.170 Traffic Based Keep Alive: Not Supported 00:07:49.170 Namespace Granularity: Not Supported 00:07:49.170 SQ Associations: Not Supported 00:07:49.170 UUID List: Not Supported 00:07:49.170 Multi-Domain Subsystem: Not Supported 00:07:49.170 Fixed Capacity Management: Not Supported 00:07:49.170 Variable Capacity Management: Not Supported 00:07:49.170 Delete Endurance Group: Not Supported 00:07:49.170 Delete NVM Set: Not Supported 00:07:49.170 Extended LBA Formats Supported: Supported 00:07:49.170 Flexible Data Placement Supported: Supported 00:07:49.170 00:07:49.170 Controller Memory Buffer Support 00:07:49.170 ================================ 00:07:49.170 Supported: No 00:07:49.170 00:07:49.170 Persistent Memory Region Support 00:07:49.170 ================================ 00:07:49.170 Supported: No 00:07:49.170 00:07:49.170 Admin Command Set Attributes 00:07:49.170 ============================ 00:07:49.170 Security Send/Receive: Not Supported 00:07:49.170 Format NVM: Supported 00:07:49.170 Firmware Activate/Download: Not Supported 00:07:49.170 Namespace Management: Supported 00:07:49.170 Device Self-Test: Not Supported 00:07:49.170 Directives: Supported 00:07:49.170 NVMe-MI: Not Supported 00:07:49.170 Virtualization Management: Not Supported 00:07:49.170 Doorbell Buffer Config: Supported 00:07:49.170 Get LBA Status Capability: Not Supported 00:07:49.170 Command & Feature Lockdown Capability: Not Supported 00:07:49.170 Abort Command Limit: 4 00:07:49.170 Async Event Request Limit: 4 00:07:49.170 Number of Firmware Slots: N/A 00:07:49.170 Firmware Slot 1 Read-Only: N/A 00:07:49.170 Firmware Activation Without Reset: N/A 00:07:49.170 Multiple Update Detection Support: N/A 00:07:49.170 Firmware Update Granularity: No Information Provided 00:07:49.170 Per-Namespace SMART Log: Yes 00:07:49.170 Asymmetric Namespace Access Log Page: Not Supported 00:07:49.170 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:49.170 Command Effects Log Page: Supported 00:07:49.170 Get Log Page Extended Data: Supported 00:07:49.170 Telemetry Log Pages: Not Supported
00:07:49.170 Persistent Event Log Pages: Not Supported 00:07:49.170 Supported Log Pages Log Page: May Support 00:07:49.170 Commands Supported & Effects Log Page: Not Supported 00:07:49.170 Feature Identifiers & Effects Log Page: May Support 00:07:49.170 NVMe-MI Commands & Effects Log Page: May Support 00:07:49.170 Data Area 4 for Telemetry Log: Not Supported 00:07:49.170 Error Log Page Entries Supported: 1 00:07:49.170 Keep Alive: Not Supported 00:07:49.170 00:07:49.170 NVM Command Set Attributes 00:07:49.170 ========================== 00:07:49.170 Submission Queue Entry Size 00:07:49.170 Max: 64 00:07:49.170 Min: 64 00:07:49.170 Completion Queue Entry Size 00:07:49.170 Max: 16 00:07:49.170 Min: 16 00:07:49.170 Number of Namespaces: 256 00:07:49.170 Compare Command: Supported 00:07:49.170 Write Uncorrectable Command: Not Supported 00:07:49.170 Dataset Management Command: Supported 00:07:49.170 Write Zeroes Command: Supported 00:07:49.170 Set Features Save Field: Supported 00:07:49.170 Reservations: Not Supported 00:07:49.170 Timestamp: Supported 00:07:49.170 Copy: Supported 00:07:49.170 Volatile Write Cache: Present 00:07:49.170 Atomic Write Unit (Normal): 1 00:07:49.170 Atomic Write Unit (PFail): 1 00:07:49.170 Atomic Compare & Write Unit: 1 00:07:49.170 Fused Compare & Write: Not Supported 00:07:49.170 Scatter-Gather List 00:07:49.170 SGL Command Set: Supported 00:07:49.170 SGL Keyed: Not Supported 00:07:49.170 SGL Bit Bucket Descriptor: Not Supported 00:07:49.170 SGL Metadata Pointer: Not Supported 00:07:49.170 Oversized SGL: Not Supported 00:07:49.170 SGL Metadata Address: Not Supported 00:07:49.170 SGL Offset: Not Supported 00:07:49.170 Transport SGL Data Block: Not Supported 00:07:49.170 Replay Protected Memory Block: Not Supported 00:07:49.170 00:07:49.170 Firmware Slot Information 00:07:49.170 ========================= 00:07:49.170 Active slot: 1 00:07:49.170 Slot 1 Firmware Revision: 1.0 00:07:49.170 00:07:49.170 00:07:49.170 Commands Supported and Effects 00:07:49.170 ============================== 00:07:49.170 Admin Commands 00:07:49.170 -------------- 00:07:49.170 Delete I/O Submission Queue (00h): Supported 00:07:49.170 Create I/O Submission Queue (01h): Supported 00:07:49.170 Get Log Page (02h): Supported 00:07:49.170 Delete I/O Completion Queue (04h): Supported 00:07:49.170 Create I/O Completion Queue (05h): Supported 00:07:49.170 Identify (06h): Supported 00:07:49.170 Abort (08h): Supported 00:07:49.170 Set Features (09h): Supported 00:07:49.170 Get Features (0Ah): Supported 00:07:49.170 Asynchronous Event Request (0Ch): Supported 00:07:49.170 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:49.170 Directive Send (19h): Supported 00:07:49.170 Directive Receive (1Ah): Supported 00:07:49.170 Virtualization Management (1Ch): Supported 00:07:49.170 Doorbell Buffer Config (7Ch): Supported 00:07:49.170 Format NVM (80h): Supported LBA-Change 00:07:49.170 I/O Commands 00:07:49.170 ------------ 00:07:49.170 Flush (00h): Supported LBA-Change 00:07:49.170 Write (01h): Supported LBA-Change 00:07:49.170 Read (02h): Supported 00:07:49.170 Compare (05h): Supported 00:07:49.170 Write Zeroes (08h): Supported LBA-Change 00:07:49.170 Dataset Management (09h): Supported LBA-Change 00:07:49.170 Unknown (0Ch): Supported 00:07:49.170 Unknown (12h): Supported 00:07:49.170 Copy (19h): Supported LBA-Change 00:07:49.170 Unknown (1Dh): Supported LBA-Change 00:07:49.170 00:07:49.170 Error Log 00:07:49.170 ========= 00:07:49.170 00:07:49.170 Arbitration 00:07:49.170 ===========
00:07:49.170 Arbitration Burst: no limit 00:07:49.170 00:07:49.170 Power Management 00:07:49.170 ================ 00:07:49.170 Number of Power States: 1 00:07:49.170 Current Power State: Power State #0 00:07:49.170 Power State #0: 00:07:49.170 Max Power: 25.00 W 00:07:49.170 Non-Operational State: Operational 00:07:49.170 Entry Latency: 16 microseconds 00:07:49.170 Exit Latency: 4 microseconds 00:07:49.170 Relative Read Throughput: 0 00:07:49.170 Relative Read Latency: 0 00:07:49.170 Relative Write Throughput: 0 00:07:49.170 Relative Write Latency: 0 00:07:49.170 Idle Power: Not Reported 00:07:49.170 Active Power: Not Reported 00:07:49.170 Non-Operational Permissive Mode: Not Supported 00:07:49.170 00:07:49.170 Health Information 00:07:49.170 ================== 00:07:49.170 Critical Warnings: 00:07:49.170 Available Spare Space: OK 00:07:49.170 Temperature: OK 00:07:49.170 Device Reliability: OK 00:07:49.170 Read Only: No 00:07:49.170 Volatile Memory Backup: OK 00:07:49.170 Current Temperature: 323 Kelvin (50 Celsius) 00:07:49.170 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:49.170 Available Spare: 0% 00:07:49.170 Available Spare Threshold: 0% 00:07:49.170 Life Percentage Used: 0% 00:07:49.170 Data Units Read: 925 00:07:49.170 Data Units Written: 854 00:07:49.170 Host Read Commands: 41862 00:07:49.170 Host Write Commands: 41285 00:07:49.170 Controller Busy Time: 0 minutes 00:07:49.170 Power Cycles: 0 00:07:49.170 Power On Hours: 0 hours 00:07:49.170 Unsafe Shutdowns: 0 00:07:49.170 Unrecoverable Media Errors: 0 00:07:49.170 Lifetime Error Log Entries: 0 00:07:49.170 Warning Temperature Time: 0 minutes 00:07:49.170 Critical Temperature Time: 0 minutes 00:07:49.170 00:07:49.170 Number of Queues 00:07:49.170 ================ 00:07:49.170 Number of I/O Submission Queues: 64 00:07:49.170 Number of I/O Completion Queues: 64 00:07:49.170 00:07:49.170 ZNS Specific Controller Data 00:07:49.170 ============================ 00:07:49.170 Zone Append Size Limit: 0 00:07:49.170 00:07:49.170 00:07:49.170 Active Namespaces 00:07:49.170 ================= 00:07:49.170 Namespace ID:1 00:07:49.170 Error Recovery Timeout: Unlimited 00:07:49.170 Command Set Identifier: NVM (00h) 00:07:49.170 Deallocate: Supported 00:07:49.170 Deallocated/Unwritten Error: Supported 00:07:49.170 Deallocated Read Value: All 0x00 00:07:49.170 Deallocate in Write Zeroes: Not Supported 00:07:49.170 Deallocated Guard Field: 0xFFFF 00:07:49.170 Flush: Supported 00:07:49.170 Reservation: Not Supported 00:07:49.170 Namespace Sharing Capabilities: Multiple Controllers 00:07:49.170 Size (in LBAs): 262144 (1GiB) 00:07:49.170 Capacity (in LBAs): 262144 (1GiB) 00:07:49.170 Utilization (in LBAs): 262144 (1GiB) 00:07:49.170 Thin Provisioning: Not Supported 00:07:49.170 Per-NS Atomic Units: No 00:07:49.170 Maximum Single Source Range Length: 128 00:07:49.170 Maximum Copy Length: 128 00:07:49.170 Maximum Source Range Count: 128 00:07:49.171 NGUID/EUI64 Never Reused: No 00:07:49.171 Namespace Write Protected: No 00:07:49.171 Endurance group ID: 1 00:07:49.171 Number of LBA Formats: 8 00:07:49.171 Current LBA Format: LBA Format #04 00:07:49.171 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:49.171 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:49.171 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:49.171 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:49.171 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:49.171 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:49.171 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:49.171 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:49.171 00:07:49.171 Get Feature FDP: 00:07:49.171 ================ 00:07:49.171 Enabled: Yes 00:07:49.171 FDP configuration index: 0 00:07:49.171 00:07:49.171 FDP configurations log page 00:07:49.171 =========================== 00:07:49.171 Number of FDP configurations: 1 00:07:49.171 Version: 0 00:07:49.171 Size: 112 00:07:49.171 FDP Configuration Descriptor: 0 00:07:49.171 Descriptor Size: 96 00:07:49.171 Reclaim Group Identifier format: 2 00:07:49.171 FDP Volatile Write Cache: Not Present 00:07:49.171 FDP Configuration: Valid 00:07:49.171 Vendor Specific Size: 0 00:07:49.171 Number of Reclaim Groups: 2 00:07:49.171 Number of Reclaim Unit Handles: 8 00:07:49.171 Max Placement Identifiers: 128 00:07:49.171 Number of Namespaces Supported: 256 00:07:49.171 Reclaim Unit Nominal Size: 6000000 bytes 00:07:49.171 Estimated Reclaim Unit Time Limit: Not Reported 00:07:49.171 RUH Desc #000: RUH Type: Initially Isolated 00:07:49.171 RUH Desc #001: RUH Type: Initially Isolated 00:07:49.171 RUH Desc #002: RUH Type: Initially Isolated 00:07:49.171 RUH Desc #003: RUH Type: Initially Isolated 00:07:49.171 RUH Desc #004: RUH Type: Initially Isolated 00:07:49.171 RUH Desc #005: RUH Type: Initially Isolated 00:07:49.171 RUH Desc #006: RUH Type: Initially Isolated 00:07:49.171 RUH Desc #007: RUH Type: Initially Isolated 00:07:49.171 00:07:49.171 FDP reclaim unit handle usage log page 00:07:49.171 ====================================== 00:07:49.171 Number of Reclaim Unit Handles: 8 00:07:49.171 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:49.171 RUH Usage Desc #001: RUH Attributes: Unused 00:07:49.171 RUH Usage Desc #002: RUH Attributes: Unused 00:07:49.171 RUH Usage Desc #003: RUH Attributes: Unused 00:07:49.171 RUH Usage Desc #004: RUH Attributes: Unused 00:07:49.171 RUH Usage Desc #005: RUH Attributes: Unused 00:07:49.171 RUH Usage Desc #006: RUH Attributes: Unused 00:07:49.171 RUH Usage Desc #007: RUH Attributes: Unused 00:07:49.171 00:07:49.171 FDP statistics log page 00:07:49.171 ======================= 00:07:49.171 Host bytes with metadata written: 494051328 00:07:49.171 Media bytes with metadata written: 494104576 00:07:49.171 Media bytes erased: 0 00:07:49.171 00:07:49.171 FDP events log page 00:07:49.171 =================== 00:07:49.171 Number of FDP events: 0 00:07:49.171 00:07:49.171 NVM Specific Namespace Data 00:07:49.171 =========================== 00:07:49.171 Logical Block Storage Tag Mask: 0 00:07:49.171 Protection Information Capabilities: 00:07:49.171 16b Guard Protection Information Storage Tag Support: No 00:07:49.171 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:49.171 Storage Tag Check Read Support: No 00:07:49.171 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.171 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.171 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.171 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.171 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.171 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.171 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.171 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.171 00:07:49.171 real 0m1.008s 00:07:49.171 user 0m0.393s 00:07:49.171 sys 0m0.406s 00:07:49.171 ************************************ 00:07:49.171 END TEST nvme_identify 00:07:49.171 ************************************ 00:07:49.171 21:13:38 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.171 21:13:38 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:49.171 21:13:38 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:49.171 21:13:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.171 21:13:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.171 21:13:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.171 ************************************ 00:07:49.171 START TEST nvme_perf 00:07:49.171 ************************************ 00:07:49.171 21:13:38 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:49.171 21:13:38 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:50.548 Initializing NVMe Controllers 00:07:50.548 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:50.548 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:50.548 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:50.548 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:50.548 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:50.548 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:50.548 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:50.548 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:50.548 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:50.548 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:50.548 Initialization complete. Launching workers. 
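A few figures in the identify dumps above are worth unpacking, since the tool prints them without derivation: namespace sizes are reported in LBAs against the current LBA format (#04, 4096-byte data blocks, no metadata), temperatures in integer Kelvin, and the FDP statistics as raw byte counters. A minimal sketch of the arithmetic in plain Python (no SPDK bindings; variable names are illustrative, figures copied from the log):

```python
# Arithmetic behind the spdk_nvme_identify output above (illustrative only).

LBA_SIZE = 4096  # current LBA Format #04: 4096-byte data blocks, no metadata

# "Size (in LBAs): 1048576 (4GiB)" for the private namespaces, and
# "Size (in LBAs): 262144 (1GiB)" for the FDP endurance-group namespace:
print(1048576 * LBA_SIZE / 2**30)   # 4.0 (GiB)
print(262144 * LBA_SIZE / 2**30)    # 1.0 (GiB)

# "Current Temperature: 323 Kelvin (50 Celsius)" uses the integer-Kelvin field:
print(323 - 273)                    # 50 (degrees Celsius, as printed)

# FDP statistics log page: write amplification so far on the fdp-subsys3 controller
host_bytes = 494051328              # "Host bytes with metadata written"
media_bytes = 494104576             # "Media bytes with metadata written"
print(media_bytes / host_bytes)     # ~1.0001, i.e. effectively no amplification yet
```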
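The summary table below is internally consistent with the spdk_nvme_perf invocation: `-q 128 -w read -o 12288 -t 1` issues 12288-byte (12 KiB, i.e. three 4 KiB blocks) reads at queue depth 128 per namespace for one second, so each MiB/s figure is IOPS x 12288 / 2^20, the Total IOPS row is the sum over the six namespaces, and the overall average latency is the mean of the per-device averages. A sanity-check sketch (plain Python, figures copied from the table; not SPDK code):

```python
# Sanity-check of the spdk_nvme_perf summary table below (illustrative only).
IO_SIZE = 12288  # bytes per I/O, from the -o 12288 flag

# (IOPS, average latency in us) per namespace, copied from the table.
devices = {
    "PCIE (0000:00:10.0) NSID 1": (17612.27, 7269.13),
    "PCIE (0000:00:11.0) NSID 1": (17612.27, 7264.14),
    "PCIE (0000:00:13.0) NSID 1": (17612.27, 7258.08),
    "PCIE (0000:00:12.0) NSID 1": (17612.27, 7252.03),
    "PCIE (0000:00:12.0) NSID 2": (17612.27, 7245.91),
    "PCIE (0000:00:12.0) NSID 3": (17612.27, 7239.93),
}

for name, (iops, _) in devices.items():
    # 17612.27 * 12288 / 2**20 = 206.39, matching the MiB/s column
    print(f"{name}: {iops * IO_SIZE / 2**20:.2f} MiB/s")

# Little's law check: queue depth / latency = 128 / 7254.87e-6 s ~ 17.6k IOPS,
# consistent with the per-device IOPS at queue depth 128.
total_iops = sum(iops for iops, _ in devices.values())             # ~105673.6 (Total row)
mean_lat = sum(lat for _, lat in devices.values()) / len(devices)  # 7254.87 us (Total avg)
print(f"Total: {total_iops:.2f} IOPS, {mean_lat:.2f} us average latency")
```

In the latency histograms further down, each row is a bucket: the cumulative percentage of I/Os that completed at or below that latency, with the bucket's own I/O count in parentheses (e.g. on 0000:00:10.0, 99.90% of reads completed within about 24.2 ms). The percentile summaries above each histogram report the same distribution at fixed quantiles.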
00:07:50.548 ======================================================== 00:07:50.548 Latency(us) 00:07:50.548 Device Information : IOPS MiB/s Average min max 00:07:50.548 PCIE (0000:00:10.0) NSID 1 from core 0: 17612.27 206.39 7269.13 4991.98 24538.23 00:07:50.548 PCIE (0000:00:11.0) NSID 1 from core 0: 17612.27 206.39 7264.14 4810.14 24749.62 00:07:50.548 PCIE (0000:00:13.0) NSID 1 from core 0: 17612.27 206.39 7258.08 4044.70 24961.51 00:07:50.548 PCIE (0000:00:12.0) NSID 1 from core 0: 17612.27 206.39 7252.03 3858.05 25148.84 00:07:50.548 PCIE (0000:00:12.0) NSID 2 from core 0: 17612.27 206.39 7245.91 3554.15 25295.19 00:07:50.548 PCIE (0000:00:12.0) NSID 3 from core 0: 17612.27 206.39 7239.93 3211.36 25146.42 00:07:50.548 ======================================================== 00:07:50.548 Total : 105673.64 1238.36 7254.87 3211.36 25295.19 00:07:50.548 00:07:50.548 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:50.548 ================================================================================= 00:07:50.548 1.00000% : 5999.065us 00:07:50.548 10.00000% : 6326.745us 00:07:50.548 25.00000% : 6604.012us 00:07:50.548 50.00000% : 6956.898us 00:07:50.548 75.00000% : 7309.785us 00:07:50.548 90.00000% : 8015.557us 00:07:50.548 95.00000% : 10384.935us 00:07:50.548 98.00000% : 12351.015us 00:07:50.548 99.00000% : 14216.271us 00:07:50.548 99.50000% : 15829.465us 00:07:50.548 99.90000% : 24197.908us 00:07:50.548 99.99000% : 24601.206us 00:07:50.548 99.99900% : 24601.206us 00:07:50.548 99.99990% : 24601.206us 00:07:50.548 99.99999% : 24601.206us 00:07:50.548 00:07:50.548 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:50.548 ================================================================================= 00:07:50.548 1.00000% : 6074.683us 00:07:50.548 10.00000% : 6377.157us 00:07:50.548 25.00000% : 6604.012us 00:07:50.548 50.00000% : 6906.486us 00:07:50.548 75.00000% : 7309.785us 00:07:50.548 90.00000% : 7965.145us 00:07:50.548 95.00000% : 10435.348us 00:07:50.548 98.00000% : 12451.840us 00:07:50.548 99.00000% : 13712.148us 00:07:50.548 99.50000% : 16232.763us 00:07:50.548 99.90000% : 24298.732us 00:07:50.548 99.99000% : 24802.855us 00:07:50.548 99.99900% : 24802.855us 00:07:50.548 99.99990% : 24802.855us 00:07:50.548 99.99999% : 24802.855us 00:07:50.548 00:07:50.548 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:50.548 ================================================================================= 00:07:50.548 1.00000% : 6049.477us 00:07:50.548 10.00000% : 6377.157us 00:07:50.548 25.00000% : 6604.012us 00:07:50.548 50.00000% : 6906.486us 00:07:50.548 75.00000% : 7259.372us 00:07:50.548 90.00000% : 7864.320us 00:07:50.548 95.00000% : 10435.348us 00:07:50.548 98.00000% : 12502.252us 00:07:50.548 99.00000% : 13510.498us 00:07:50.548 99.50000% : 16736.886us 00:07:50.548 99.90000% : 24500.382us 00:07:50.548 99.99000% : 25004.505us 00:07:50.549 99.99900% : 25004.505us 00:07:50.549 99.99990% : 25004.505us 00:07:50.549 99.99999% : 25004.505us 00:07:50.549 00:07:50.549 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:50.549 ================================================================================= 00:07:50.549 1.00000% : 6074.683us 00:07:50.549 10.00000% : 6377.157us 00:07:50.549 25.00000% : 6604.012us 00:07:50.549 50.00000% : 6906.486us 00:07:50.549 75.00000% : 7259.372us 00:07:50.549 90.00000% : 7813.908us 00:07:50.549 95.00000% : 10384.935us 00:07:50.549 98.00000% : 12351.015us 00:07:50.549 99.00000% 
: 13812.972us 00:07:50.549 99.50000% : 16535.237us 00:07:50.549 99.90000% : 24702.031us 00:07:50.549 99.99000% : 25206.154us 00:07:50.549 99.99900% : 25206.154us 00:07:50.549 99.99990% : 25206.154us 00:07:50.549 99.99999% : 25206.154us 00:07:50.549 00:07:50.549 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:50.549 ================================================================================= 00:07:50.549 1.00000% : 6099.889us 00:07:50.549 10.00000% : 6377.157us 00:07:50.549 25.00000% : 6604.012us 00:07:50.549 50.00000% : 6906.486us 00:07:50.549 75.00000% : 7259.372us 00:07:50.549 90.00000% : 7813.908us 00:07:50.549 95.00000% : 10284.111us 00:07:50.549 98.00000% : 12199.778us 00:07:50.549 99.00000% : 13812.972us 00:07:50.549 99.50000% : 16031.114us 00:07:50.549 99.90000% : 24903.680us 00:07:50.549 99.99000% : 25306.978us 00:07:50.549 99.99900% : 25306.978us 00:07:50.549 99.99990% : 25306.978us 00:07:50.549 99.99999% : 25306.978us 00:07:50.549 00:07:50.549 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:50.549 ================================================================================= 00:07:50.549 1.00000% : 6074.683us 00:07:50.549 10.00000% : 6377.157us 00:07:50.549 25.00000% : 6604.012us 00:07:50.549 50.00000% : 6906.486us 00:07:50.549 75.00000% : 7259.372us 00:07:50.549 90.00000% : 7914.732us 00:07:50.549 95.00000% : 10132.874us 00:07:50.549 98.00000% : 12300.603us 00:07:50.549 99.00000% : 13812.972us 00:07:50.549 99.50000% : 15627.815us 00:07:50.549 99.90000% : 24903.680us 00:07:50.549 99.99000% : 25206.154us 00:07:50.549 99.99900% : 25206.154us 00:07:50.549 99.99990% : 25206.154us 00:07:50.549 99.99999% : 25206.154us 00:07:50.549 00:07:50.549 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:50.549 ============================================================================== 00:07:50.549 Range in us Cumulative IO count 00:07:50.549 4990.818 - 5016.025: 0.0170% ( 3) 00:07:50.549 5016.025 - 5041.231: 0.0283% ( 2) 00:07:50.549 5041.231 - 5066.437: 0.0340% ( 1) 00:07:50.549 5066.437 - 5091.643: 0.0510% ( 3) 00:07:50.549 5091.643 - 5116.849: 0.0623% ( 2) 00:07:50.549 5116.849 - 5142.055: 0.0736% ( 2) 00:07:50.549 5142.055 - 5167.262: 0.0849% ( 2) 00:07:50.549 5167.262 - 5192.468: 0.0962% ( 2) 00:07:50.549 5192.468 - 5217.674: 0.1076% ( 2) 00:07:50.549 5217.674 - 5242.880: 0.1189% ( 2) 00:07:50.549 5242.880 - 5268.086: 0.1302% ( 2) 00:07:50.549 5268.086 - 5293.292: 0.1415% ( 2) 00:07:50.549 5293.292 - 5318.498: 0.1529% ( 2) 00:07:50.549 5318.498 - 5343.705: 0.1642% ( 2) 00:07:50.549 5343.705 - 5368.911: 0.1755% ( 2) 00:07:50.549 5368.911 - 5394.117: 0.1868% ( 2) 00:07:50.549 5394.117 - 5419.323: 0.1981% ( 2) 00:07:50.549 5419.323 - 5444.529: 0.2095% ( 2) 00:07:50.549 5444.529 - 5469.735: 0.2208% ( 2) 00:07:50.549 5469.735 - 5494.942: 0.2321% ( 2) 00:07:50.549 5494.942 - 5520.148: 0.2434% ( 2) 00:07:50.549 5520.148 - 5545.354: 0.2548% ( 2) 00:07:50.549 5545.354 - 5570.560: 0.2717% ( 3) 00:07:50.549 5570.560 - 5595.766: 0.2774% ( 1) 00:07:50.549 5595.766 - 5620.972: 0.2887% ( 2) 00:07:50.549 5620.972 - 5646.178: 0.3000% ( 2) 00:07:50.549 5646.178 - 5671.385: 0.3057% ( 1) 00:07:50.549 5671.385 - 5696.591: 0.3170% ( 2) 00:07:50.549 5696.591 - 5721.797: 0.3340% ( 3) 00:07:50.549 5721.797 - 5747.003: 0.3397% ( 1) 00:07:50.549 5747.003 - 5772.209: 0.3510% ( 2) 00:07:50.549 5772.209 - 5797.415: 0.3623% ( 2) 00:07:50.549 5822.622 - 5847.828: 0.3793% ( 3) 00:07:50.549 5847.828 - 5873.034: 0.4133% ( 6) 00:07:50.549 5873.034 - 
5898.240: 0.4755% ( 11) 00:07:50.549 5898.240 - 5923.446: 0.5605% ( 15) 00:07:50.549 5923.446 - 5948.652: 0.6907% ( 23) 00:07:50.549 5948.652 - 5973.858: 0.9058% ( 38) 00:07:50.549 5973.858 - 5999.065: 1.0983% ( 34) 00:07:50.549 5999.065 - 6024.271: 1.3191% ( 39) 00:07:50.549 6024.271 - 6049.477: 1.6587% ( 60) 00:07:50.549 6049.477 - 6074.683: 2.1286% ( 83) 00:07:50.549 6074.683 - 6099.889: 2.5419% ( 73) 00:07:50.549 6099.889 - 6125.095: 2.9721% ( 76) 00:07:50.549 6125.095 - 6150.302: 3.6232% ( 115) 00:07:50.549 6150.302 - 6175.508: 4.4667% ( 149) 00:07:50.549 6175.508 - 6200.714: 5.2536% ( 139) 00:07:50.549 6200.714 - 6225.920: 6.1764% ( 163) 00:07:50.549 6225.920 - 6251.126: 7.1671% ( 175) 00:07:50.549 6251.126 - 6276.332: 8.2371% ( 189) 00:07:50.549 6276.332 - 6301.538: 9.3354% ( 194) 00:07:50.549 6301.538 - 6326.745: 10.4676% ( 200) 00:07:50.549 6326.745 - 6351.951: 11.7018% ( 218) 00:07:50.549 6351.951 - 6377.157: 13.0038% ( 230) 00:07:50.549 6377.157 - 6402.363: 14.4192% ( 250) 00:07:50.549 6402.363 - 6427.569: 15.8005% ( 244) 00:07:50.549 6427.569 - 6452.775: 17.1762% ( 243) 00:07:50.549 6452.775 - 6503.188: 20.2332% ( 540) 00:07:50.549 6503.188 - 6553.600: 23.4828% ( 574) 00:07:50.549 6553.600 - 6604.012: 26.9135% ( 606) 00:07:50.549 6604.012 - 6654.425: 30.4574% ( 626) 00:07:50.549 6654.425 - 6704.837: 34.1769% ( 657) 00:07:50.549 6704.837 - 6755.249: 38.0831% ( 690) 00:07:50.549 6755.249 - 6805.662: 41.9158% ( 677) 00:07:50.549 6805.662 - 6856.074: 45.8163% ( 689) 00:07:50.549 6856.074 - 6906.486: 49.8415% ( 711) 00:07:50.549 6906.486 - 6956.898: 53.7534% ( 691) 00:07:50.549 6956.898 - 7007.311: 57.5408% ( 669) 00:07:50.549 7007.311 - 7057.723: 61.2659% ( 658) 00:07:50.549 7057.723 - 7108.135: 64.8041% ( 625) 00:07:50.549 7108.135 - 7158.548: 68.1046% ( 583) 00:07:50.549 7158.548 - 7208.960: 71.0145% ( 514) 00:07:50.549 7208.960 - 7259.372: 73.6526% ( 466) 00:07:50.549 7259.372 - 7309.785: 75.9681% ( 409) 00:07:50.549 7309.785 - 7360.197: 78.1646% ( 388) 00:07:50.549 7360.197 - 7410.609: 80.0725% ( 337) 00:07:50.549 7410.609 - 7461.022: 81.6180% ( 273) 00:07:50.549 7461.022 - 7511.434: 83.1522% ( 271) 00:07:50.549 7511.434 - 7561.846: 84.3976% ( 220) 00:07:50.549 7561.846 - 7612.258: 85.4563% ( 187) 00:07:50.549 7612.258 - 7662.671: 86.3904% ( 165) 00:07:50.549 7662.671 - 7713.083: 87.1603% ( 136) 00:07:50.549 7713.083 - 7763.495: 87.8284% ( 118) 00:07:50.549 7763.495 - 7813.908: 88.4341% ( 107) 00:07:50.549 7813.908 - 7864.320: 88.9436% ( 90) 00:07:50.549 7864.320 - 7914.732: 89.3625% ( 74) 00:07:50.549 7914.732 - 7965.145: 89.7362% ( 66) 00:07:50.549 7965.145 - 8015.557: 90.0985% ( 64) 00:07:50.549 8015.557 - 8065.969: 90.4382% ( 60) 00:07:50.549 8065.969 - 8116.382: 90.7043% ( 47) 00:07:50.549 8116.382 - 8166.794: 90.8798% ( 31) 00:07:50.549 8166.794 - 8217.206: 91.0383% ( 28) 00:07:50.549 8217.206 - 8267.618: 91.1798% ( 25) 00:07:50.549 8267.618 - 8318.031: 91.3043% ( 22) 00:07:50.549 8318.031 - 8368.443: 91.4232% ( 21) 00:07:50.549 8368.443 - 8418.855: 91.5138% ( 16) 00:07:50.549 8418.855 - 8469.268: 91.6214% ( 19) 00:07:50.549 8469.268 - 8519.680: 91.7233% ( 18) 00:07:50.549 8519.680 - 8570.092: 91.8308% ( 19) 00:07:50.549 8570.092 - 8620.505: 91.9214% ( 16) 00:07:50.549 8620.505 - 8670.917: 92.0346% ( 20) 00:07:50.549 8670.917 - 8721.329: 92.1365% ( 18) 00:07:50.549 8721.329 - 8771.742: 92.2441% ( 19) 00:07:50.549 8771.742 - 8822.154: 92.3517% ( 19) 00:07:50.549 8822.154 - 8872.566: 92.4536% ( 18) 00:07:50.549 8872.566 - 8922.978: 92.5725% ( 21) 00:07:50.549 8922.978 - 
8973.391: 92.6913% ( 21) 00:07:50.549 8973.391 - 9023.803: 92.8046% ( 20) 00:07:50.549 9023.803 - 9074.215: 92.9178% ( 20) 00:07:50.549 9074.215 - 9124.628: 93.0197% ( 18) 00:07:50.549 9124.628 - 9175.040: 93.1216% ( 18) 00:07:50.549 9175.040 - 9225.452: 93.2178% ( 17) 00:07:50.549 9225.452 - 9275.865: 93.2971% ( 14) 00:07:50.549 9275.865 - 9326.277: 93.3933% ( 17) 00:07:50.549 9326.277 - 9376.689: 93.4896% ( 17) 00:07:50.549 9376.689 - 9427.102: 93.5688% ( 14) 00:07:50.549 9427.102 - 9477.514: 93.6651% ( 17) 00:07:50.549 9477.514 - 9527.926: 93.7557% ( 16) 00:07:50.549 9527.926 - 9578.338: 93.8349% ( 14) 00:07:50.549 9578.338 - 9628.751: 93.9085% ( 13) 00:07:50.549 9628.751 - 9679.163: 94.0161% ( 19) 00:07:50.549 9679.163 - 9729.575: 94.0897% ( 13) 00:07:50.549 9729.575 - 9779.988: 94.1463% ( 10) 00:07:50.549 9779.988 - 9830.400: 94.2312% ( 15) 00:07:50.549 9830.400 - 9880.812: 94.2991% ( 12) 00:07:50.549 9880.812 - 9931.225: 94.3614% ( 11) 00:07:50.549 9931.225 - 9981.637: 94.4350% ( 13) 00:07:50.549 9981.637 - 10032.049: 94.5256% ( 16) 00:07:50.549 10032.049 - 10082.462: 94.6162% ( 16) 00:07:50.549 10082.462 - 10132.874: 94.6954% ( 14) 00:07:50.549 10132.874 - 10183.286: 94.7690% ( 13) 00:07:50.549 10183.286 - 10233.698: 94.8313% ( 11) 00:07:50.549 10233.698 - 10284.111: 94.8992% ( 12) 00:07:50.549 10284.111 - 10334.523: 94.9785% ( 14) 00:07:50.549 10334.523 - 10384.935: 95.0408% ( 11) 00:07:50.549 10384.935 - 10435.348: 95.1313% ( 16) 00:07:50.549 10435.348 - 10485.760: 95.2276% ( 17) 00:07:50.549 10485.760 - 10536.172: 95.3351% ( 19) 00:07:50.549 10536.172 - 10586.585: 95.4031% ( 12) 00:07:50.549 10586.585 - 10636.997: 95.5050% ( 18) 00:07:50.549 10636.997 - 10687.409: 95.5842% ( 14) 00:07:50.549 10687.409 - 10737.822: 95.6861% ( 18) 00:07:50.549 10737.822 - 10788.234: 95.8107% ( 22) 00:07:50.549 10788.234 - 10838.646: 95.9069% ( 17) 00:07:50.549 10838.646 - 10889.058: 96.0145% ( 19) 00:07:50.549 10889.058 - 10939.471: 96.0994% ( 15) 00:07:50.549 10939.471 - 10989.883: 96.1730% ( 13) 00:07:50.550 10989.883 - 11040.295: 96.2579% ( 15) 00:07:50.550 11040.295 - 11090.708: 96.3315% ( 13) 00:07:50.550 11090.708 - 11141.120: 96.4051% ( 13) 00:07:50.550 11141.120 - 11191.532: 96.4787% ( 13) 00:07:50.550 11191.532 - 11241.945: 96.5693% ( 16) 00:07:50.550 11241.945 - 11292.357: 96.6316% ( 11) 00:07:50.550 11292.357 - 11342.769: 96.7052% ( 13) 00:07:50.550 11342.769 - 11393.182: 96.7674% ( 11) 00:07:50.550 11393.182 - 11443.594: 96.8524% ( 15) 00:07:50.550 11443.594 - 11494.006: 96.9316% ( 14) 00:07:50.550 11494.006 - 11544.418: 96.9882% ( 10) 00:07:50.550 11544.418 - 11594.831: 97.0505% ( 11) 00:07:50.550 11594.831 - 11645.243: 97.1128% ( 11) 00:07:50.550 11645.243 - 11695.655: 97.1750% ( 11) 00:07:50.550 11695.655 - 11746.068: 97.2373% ( 11) 00:07:50.550 11746.068 - 11796.480: 97.2883% ( 9) 00:07:50.550 11796.480 - 11846.892: 97.3392% ( 9) 00:07:50.550 11846.892 - 11897.305: 97.4241% ( 15) 00:07:50.550 11897.305 - 11947.717: 97.4694% ( 8) 00:07:50.550 11947.717 - 11998.129: 97.5374% ( 12) 00:07:50.550 11998.129 - 12048.542: 97.5883% ( 9) 00:07:50.550 12048.542 - 12098.954: 97.6506% ( 11) 00:07:50.550 12098.954 - 12149.366: 97.7355% ( 15) 00:07:50.550 12149.366 - 12199.778: 97.7865% ( 9) 00:07:50.550 12199.778 - 12250.191: 97.8770% ( 16) 00:07:50.550 12250.191 - 12300.603: 97.9223% ( 8) 00:07:50.550 12300.603 - 12351.015: 98.0186% ( 17) 00:07:50.550 12351.015 - 12401.428: 98.0808% ( 11) 00:07:50.550 12401.428 - 12451.840: 98.1261% ( 8) 00:07:50.550 12451.840 - 12502.252: 98.1827% ( 10) 
00:07:50.550 12502.252 - 12552.665: 98.2280% ( 8) 00:07:50.550 12552.665 - 12603.077: 98.2677% ( 7) 00:07:50.550 12603.077 - 12653.489: 98.3243% ( 10) 00:07:50.550 12653.489 - 12703.902: 98.3752% ( 9) 00:07:50.550 12703.902 - 12754.314: 98.4149% ( 7) 00:07:50.550 12754.314 - 12804.726: 98.4658% ( 9) 00:07:50.550 12804.726 - 12855.138: 98.4998% ( 6) 00:07:50.550 12855.138 - 12905.551: 98.5620% ( 11) 00:07:50.550 12905.551 - 13006.375: 98.6243% ( 11) 00:07:50.550 13006.375 - 13107.200: 98.6753% ( 9) 00:07:50.550 13107.200 - 13208.025: 98.7319% ( 10) 00:07:50.550 13208.025 - 13308.849: 98.7659% ( 6) 00:07:50.550 13308.849 - 13409.674: 98.8055% ( 7) 00:07:50.550 13409.674 - 13510.498: 98.8281% ( 4) 00:07:50.550 13510.498 - 13611.323: 98.8451% ( 3) 00:07:50.550 13611.323 - 13712.148: 98.8621% ( 3) 00:07:50.550 13712.148 - 13812.972: 98.8791% ( 3) 00:07:50.550 13812.972 - 13913.797: 98.9074% ( 5) 00:07:50.550 13913.797 - 14014.622: 98.9583% ( 9) 00:07:50.550 14014.622 - 14115.446: 98.9866% ( 5) 00:07:50.550 14115.446 - 14216.271: 99.0149% ( 5) 00:07:50.550 14216.271 - 14317.095: 99.0319% ( 3) 00:07:50.550 14317.095 - 14417.920: 99.0602% ( 5) 00:07:50.550 14417.920 - 14518.745: 99.0885% ( 5) 00:07:50.550 14518.745 - 14619.569: 99.1055% ( 3) 00:07:50.550 14619.569 - 14720.394: 99.1338% ( 5) 00:07:50.550 14720.394 - 14821.218: 99.1678% ( 6) 00:07:50.550 14821.218 - 14922.043: 99.1904% ( 4) 00:07:50.550 14922.043 - 15022.868: 99.2188% ( 5) 00:07:50.550 15022.868 - 15123.692: 99.2697% ( 9) 00:07:50.550 15123.692 - 15224.517: 99.3320% ( 11) 00:07:50.550 15224.517 - 15325.342: 99.3659% ( 6) 00:07:50.550 15325.342 - 15426.166: 99.3942% ( 5) 00:07:50.550 15426.166 - 15526.991: 99.4226% ( 5) 00:07:50.550 15526.991 - 15627.815: 99.4565% ( 6) 00:07:50.550 15627.815 - 15728.640: 99.4792% ( 4) 00:07:50.550 15728.640 - 15829.465: 99.5131% ( 6) 00:07:50.550 15829.465 - 15930.289: 99.5471% ( 6) 00:07:50.550 15930.289 - 16031.114: 99.5697% ( 4) 00:07:50.550 16031.114 - 16131.938: 99.6037% ( 6) 00:07:50.550 16131.938 - 16232.763: 99.6320% ( 5) 00:07:50.550 16232.763 - 16333.588: 99.6377% ( 1) 00:07:50.550 23088.837 - 23189.662: 99.6660% ( 5) 00:07:50.550 23189.662 - 23290.486: 99.6943% ( 5) 00:07:50.550 23290.486 - 23391.311: 99.7000% ( 1) 00:07:50.550 23492.135 - 23592.960: 99.7283% ( 5) 00:07:50.550 23592.960 - 23693.785: 99.7566% ( 5) 00:07:50.550 23693.785 - 23794.609: 99.7849% ( 5) 00:07:50.550 23794.609 - 23895.434: 99.8132% ( 5) 00:07:50.550 23895.434 - 23996.258: 99.8415% ( 5) 00:07:50.550 23996.258 - 24097.083: 99.8811% ( 7) 00:07:50.550 24097.083 - 24197.908: 99.9038% ( 4) 00:07:50.550 24197.908 - 24298.732: 99.9377% ( 6) 00:07:50.550 24298.732 - 24399.557: 99.9604% ( 4) 00:07:50.550 24399.557 - 24500.382: 99.9887% ( 5) 00:07:50.550 24500.382 - 24601.206: 100.0000% ( 2) 00:07:50.550 00:07:50.550 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:50.550 ============================================================================== 00:07:50.550 Range in us Cumulative IO count 00:07:50.550 4789.169 - 4814.375: 0.0057% ( 1) 00:07:50.550 4814.375 - 4839.582: 0.0510% ( 8) 00:07:50.550 4839.582 - 4864.788: 0.0566% ( 1) 00:07:50.550 4864.788 - 4889.994: 0.0623% ( 1) 00:07:50.550 4889.994 - 4915.200: 0.0736% ( 2) 00:07:50.550 4915.200 - 4940.406: 0.0849% ( 2) 00:07:50.550 4940.406 - 4965.612: 0.0962% ( 2) 00:07:50.550 4965.612 - 4990.818: 0.1189% ( 4) 00:07:50.550 4990.818 - 5016.025: 0.1302% ( 2) 00:07:50.550 5016.025 - 5041.231: 0.1529% ( 4) 00:07:50.550 5041.231 - 5066.437: 0.1642% ( 2) 
00:07:50.550 5066.437 - 5091.643: 0.1812% ( 3) 00:07:50.550 5091.643 - 5116.849: 0.1868% ( 1) 00:07:50.550 5116.849 - 5142.055: 0.2038% ( 3) 00:07:50.550 5142.055 - 5167.262: 0.2151% ( 2) 00:07:50.550 5167.262 - 5192.468: 0.2264% ( 2) 00:07:50.550 5192.468 - 5217.674: 0.2434% ( 3) 00:07:50.550 5217.674 - 5242.880: 0.2548% ( 2) 00:07:50.550 5242.880 - 5268.086: 0.2717% ( 3) 00:07:50.550 5268.086 - 5293.292: 0.2831% ( 2) 00:07:50.550 5293.292 - 5318.498: 0.3000% ( 3) 00:07:50.550 5318.498 - 5343.705: 0.3114% ( 2) 00:07:50.550 5343.705 - 5368.911: 0.3227% ( 2) 00:07:50.550 5368.911 - 5394.117: 0.3397% ( 3) 00:07:50.550 5394.117 - 5419.323: 0.3510% ( 2) 00:07:50.550 5419.323 - 5444.529: 0.3623% ( 2) 00:07:50.550 5898.240 - 5923.446: 0.3736% ( 2) 00:07:50.550 5923.446 - 5948.652: 0.4529% ( 14) 00:07:50.550 5948.652 - 5973.858: 0.5491% ( 17) 00:07:50.550 5973.858 - 5999.065: 0.6171% ( 12) 00:07:50.550 5999.065 - 6024.271: 0.7416% ( 22) 00:07:50.550 6024.271 - 6049.477: 0.9171% ( 31) 00:07:50.550 6049.477 - 6074.683: 1.2398% ( 57) 00:07:50.550 6074.683 - 6099.889: 1.5908% ( 62) 00:07:50.550 6099.889 - 6125.095: 1.9758% ( 68) 00:07:50.550 6125.095 - 6150.302: 2.3834% ( 72) 00:07:50.550 6150.302 - 6175.508: 2.9212% ( 95) 00:07:50.550 6175.508 - 6200.714: 3.4703% ( 97) 00:07:50.550 6200.714 - 6225.920: 4.2120% ( 131) 00:07:50.550 6225.920 - 6251.126: 5.1291% ( 162) 00:07:50.550 6251.126 - 6276.332: 6.0915% ( 170) 00:07:50.550 6276.332 - 6301.538: 7.1388% ( 185) 00:07:50.550 6301.538 - 6326.745: 8.3050% ( 206) 00:07:50.550 6326.745 - 6351.951: 9.4769% ( 207) 00:07:50.550 6351.951 - 6377.157: 10.8865% ( 249) 00:07:50.550 6377.157 - 6402.363: 12.3302% ( 255) 00:07:50.550 6402.363 - 6427.569: 13.7172% ( 245) 00:07:50.550 6427.569 - 6452.775: 15.1325% ( 250) 00:07:50.550 6452.775 - 6503.188: 18.3707% ( 572) 00:07:50.550 6503.188 - 6553.600: 21.9882% ( 639) 00:07:50.550 6553.600 - 6604.012: 25.6624% ( 649) 00:07:50.550 6604.012 - 6654.425: 29.6422% ( 703) 00:07:50.550 6654.425 - 6704.837: 33.8202% ( 738) 00:07:50.550 6704.837 - 6755.249: 38.1058% ( 757) 00:07:50.550 6755.249 - 6805.662: 42.3290% ( 746) 00:07:50.550 6805.662 - 6856.074: 46.6316% ( 760) 00:07:50.550 6856.074 - 6906.486: 50.9964% ( 771) 00:07:50.550 6906.486 - 6956.898: 55.1461% ( 733) 00:07:50.550 6956.898 - 7007.311: 59.2108% ( 718) 00:07:50.550 7007.311 - 7057.723: 63.1058% ( 688) 00:07:50.550 7057.723 - 7108.135: 66.5308% ( 605) 00:07:50.550 7108.135 - 7158.548: 69.6784% ( 556) 00:07:50.550 7158.548 - 7208.960: 72.4468% ( 489) 00:07:50.550 7208.960 - 7259.372: 74.9604% ( 444) 00:07:50.550 7259.372 - 7309.785: 77.2475% ( 404) 00:07:50.550 7309.785 - 7360.197: 79.3025% ( 363) 00:07:50.550 7360.197 - 7410.609: 81.1255% ( 322) 00:07:50.550 7410.609 - 7461.022: 82.7276% ( 283) 00:07:50.550 7461.022 - 7511.434: 84.1202% ( 246) 00:07:50.550 7511.434 - 7561.846: 85.3431% ( 216) 00:07:50.550 7561.846 - 7612.258: 86.3111% ( 171) 00:07:50.550 7612.258 - 7662.671: 87.2056% ( 158) 00:07:50.550 7662.671 - 7713.083: 87.9529% ( 132) 00:07:50.550 7713.083 - 7763.495: 88.6153% ( 117) 00:07:50.550 7763.495 - 7813.908: 89.1474% ( 94) 00:07:50.550 7813.908 - 7864.320: 89.5720% ( 75) 00:07:50.550 7864.320 - 7914.732: 89.9740% ( 71) 00:07:50.550 7914.732 - 7965.145: 90.3589% ( 68) 00:07:50.550 7965.145 - 8015.557: 90.6363% ( 49) 00:07:50.550 8015.557 - 8065.969: 90.8401% ( 36) 00:07:50.550 8065.969 - 8116.382: 90.9873% ( 26) 00:07:50.550 8116.382 - 8166.794: 91.1345% ( 26) 00:07:50.550 8166.794 - 8217.206: 91.2364% ( 18) 00:07:50.550 8217.206 - 8267.618: 
91.3327% ( 17) 00:07:50.550 8267.618 - 8318.031: 91.4459% ( 20) 00:07:50.550 8318.031 - 8368.443: 91.5591% ( 20) 00:07:50.550 8368.443 - 8418.855: 91.6440% ( 15) 00:07:50.550 8418.855 - 8469.268: 91.7403% ( 17) 00:07:50.550 8469.268 - 8519.680: 91.8252% ( 15) 00:07:50.550 8519.680 - 8570.092: 91.9611% ( 24) 00:07:50.550 8570.092 - 8620.505: 92.0856% ( 22) 00:07:50.550 8620.505 - 8670.917: 92.1932% ( 19) 00:07:50.550 8670.917 - 8721.329: 92.3290% ( 24) 00:07:50.550 8721.329 - 8771.742: 92.4479% ( 21) 00:07:50.550 8771.742 - 8822.154: 92.5668% ( 21) 00:07:50.550 8822.154 - 8872.566: 92.6687% ( 18) 00:07:50.550 8872.566 - 8922.978: 92.7649% ( 17) 00:07:50.550 8922.978 - 8973.391: 92.8782% ( 20) 00:07:50.550 8973.391 - 9023.803: 92.9857% ( 19) 00:07:50.550 9023.803 - 9074.215: 93.0820% ( 17) 00:07:50.550 9074.215 - 9124.628: 93.1895% ( 19) 00:07:50.550 9124.628 - 9175.040: 93.2631% ( 13) 00:07:50.550 9175.040 - 9225.452: 93.3481% ( 15) 00:07:50.550 9225.452 - 9275.865: 93.4216% ( 13) 00:07:50.550 9275.865 - 9326.277: 93.4952% ( 13) 00:07:50.550 9326.277 - 9376.689: 93.5519% ( 10) 00:07:50.550 9376.689 - 9427.102: 93.6085% ( 10) 00:07:50.551 9427.102 - 9477.514: 93.6594% ( 9) 00:07:50.551 9477.514 - 9527.926: 93.7160% ( 10) 00:07:50.551 9527.926 - 9578.338: 93.7557% ( 7) 00:07:50.551 9578.338 - 9628.751: 93.7783% ( 4) 00:07:50.551 9628.751 - 9679.163: 93.8236% ( 8) 00:07:50.551 9679.163 - 9729.575: 93.8859% ( 11) 00:07:50.551 9729.575 - 9779.988: 93.9481% ( 11) 00:07:50.551 9779.988 - 9830.400: 94.0161% ( 12) 00:07:50.551 9830.400 - 9880.812: 94.0727% ( 10) 00:07:50.551 9880.812 - 9931.225: 94.1180% ( 8) 00:07:50.551 9931.225 - 9981.637: 94.1803% ( 11) 00:07:50.551 9981.637 - 10032.049: 94.2482% ( 12) 00:07:50.551 10032.049 - 10082.462: 94.3558% ( 19) 00:07:50.551 10082.462 - 10132.874: 94.4407% ( 15) 00:07:50.551 10132.874 - 10183.286: 94.5426% ( 18) 00:07:50.551 10183.286 - 10233.698: 94.6388% ( 17) 00:07:50.551 10233.698 - 10284.111: 94.7351% ( 17) 00:07:50.551 10284.111 - 10334.523: 94.8256% ( 16) 00:07:50.551 10334.523 - 10384.935: 94.9332% ( 19) 00:07:50.551 10384.935 - 10435.348: 95.0238% ( 16) 00:07:50.551 10435.348 - 10485.760: 95.1257% ( 18) 00:07:50.551 10485.760 - 10536.172: 95.1936% ( 12) 00:07:50.551 10536.172 - 10586.585: 95.3125% ( 21) 00:07:50.551 10586.585 - 10636.997: 95.4654% ( 27) 00:07:50.551 10636.997 - 10687.409: 95.5956% ( 23) 00:07:50.551 10687.409 - 10737.822: 95.7258% ( 23) 00:07:50.551 10737.822 - 10788.234: 95.8616% ( 24) 00:07:50.551 10788.234 - 10838.646: 96.0032% ( 25) 00:07:50.551 10838.646 - 10889.058: 96.1277% ( 22) 00:07:50.551 10889.058 - 10939.471: 96.2296% ( 18) 00:07:50.551 10939.471 - 10989.883: 96.3089% ( 14) 00:07:50.551 10989.883 - 11040.295: 96.3881% ( 14) 00:07:50.551 11040.295 - 11090.708: 96.4561% ( 12) 00:07:50.551 11090.708 - 11141.120: 96.5183% ( 11) 00:07:50.551 11141.120 - 11191.532: 96.5919% ( 13) 00:07:50.551 11191.532 - 11241.945: 96.6712% ( 14) 00:07:50.551 11241.945 - 11292.357: 96.7448% ( 13) 00:07:50.551 11292.357 - 11342.769: 96.8184% ( 13) 00:07:50.551 11342.769 - 11393.182: 96.8976% ( 14) 00:07:50.551 11393.182 - 11443.594: 96.9599% ( 11) 00:07:50.551 11443.594 - 11494.006: 97.0052% ( 8) 00:07:50.551 11494.006 - 11544.418: 97.0505% ( 8) 00:07:50.551 11544.418 - 11594.831: 97.0958% ( 8) 00:07:50.551 11594.831 - 11645.243: 97.1467% ( 9) 00:07:50.551 11645.243 - 11695.655: 97.1977% ( 9) 00:07:50.551 11695.655 - 11746.068: 97.2486% ( 9) 00:07:50.551 11746.068 - 11796.480: 97.3053% ( 10) 00:07:50.551 11796.480 - 11846.892: 97.3619% ( 
10) 00:07:50.551 11846.892 - 11897.305: 97.4241% ( 11) 00:07:50.551 11897.305 - 11947.717: 97.4864% ( 11) 00:07:50.551 11947.717 - 11998.129: 97.5260% ( 7) 00:07:50.551 11998.129 - 12048.542: 97.5770% ( 9) 00:07:50.551 12048.542 - 12098.954: 97.6393% ( 11) 00:07:50.551 12098.954 - 12149.366: 97.6902% ( 9) 00:07:50.551 12149.366 - 12199.778: 97.7525% ( 11) 00:07:50.551 12199.778 - 12250.191: 97.8148% ( 11) 00:07:50.551 12250.191 - 12300.603: 97.8657% ( 9) 00:07:50.551 12300.603 - 12351.015: 97.9223% ( 10) 00:07:50.551 12351.015 - 12401.428: 97.9733% ( 9) 00:07:50.551 12401.428 - 12451.840: 98.0186% ( 8) 00:07:50.551 12451.840 - 12502.252: 98.0582% ( 7) 00:07:50.551 12502.252 - 12552.665: 98.1091% ( 9) 00:07:50.551 12552.665 - 12603.077: 98.1544% ( 8) 00:07:50.551 12603.077 - 12653.489: 98.2054% ( 9) 00:07:50.551 12653.489 - 12703.902: 98.2507% ( 8) 00:07:50.551 12703.902 - 12754.314: 98.3073% ( 10) 00:07:50.551 12754.314 - 12804.726: 98.3639% ( 10) 00:07:50.551 12804.726 - 12855.138: 98.4262% ( 11) 00:07:50.551 12855.138 - 12905.551: 98.4771% ( 9) 00:07:50.551 12905.551 - 13006.375: 98.5677% ( 16) 00:07:50.551 13006.375 - 13107.200: 98.6187% ( 9) 00:07:50.551 13107.200 - 13208.025: 98.6979% ( 14) 00:07:50.551 13208.025 - 13308.849: 98.7828% ( 15) 00:07:50.551 13308.849 - 13409.674: 98.8508% ( 12) 00:07:50.551 13409.674 - 13510.498: 98.9074% ( 10) 00:07:50.551 13510.498 - 13611.323: 98.9583% ( 9) 00:07:50.551 13611.323 - 13712.148: 99.0149% ( 10) 00:07:50.551 13712.148 - 13812.972: 99.0716% ( 10) 00:07:50.551 13812.972 - 13913.797: 99.1282% ( 10) 00:07:50.551 13913.797 - 14014.622: 99.1848% ( 10) 00:07:50.551 14014.622 - 14115.446: 99.2357% ( 9) 00:07:50.551 14115.446 - 14216.271: 99.2640% ( 5) 00:07:50.551 14216.271 - 14317.095: 99.2754% ( 2) 00:07:50.551 15526.991 - 15627.815: 99.3093% ( 6) 00:07:50.551 15627.815 - 15728.640: 99.3433% ( 6) 00:07:50.551 15728.640 - 15829.465: 99.3773% ( 6) 00:07:50.551 15829.465 - 15930.289: 99.4112% ( 6) 00:07:50.551 15930.289 - 16031.114: 99.4452% ( 6) 00:07:50.551 16031.114 - 16131.938: 99.4792% ( 6) 00:07:50.551 16131.938 - 16232.763: 99.5131% ( 6) 00:07:50.551 16232.763 - 16333.588: 99.5471% ( 6) 00:07:50.551 16333.588 - 16434.412: 99.5697% ( 4) 00:07:50.551 16434.412 - 16535.237: 99.6094% ( 7) 00:07:50.551 16535.237 - 16636.062: 99.6377% ( 5) 00:07:50.551 22786.363 - 22887.188: 99.6433% ( 1) 00:07:50.551 22887.188 - 22988.012: 99.6547% ( 2) 00:07:50.551 22988.012 - 23088.837: 99.6716% ( 3) 00:07:50.551 23088.837 - 23189.662: 99.6886% ( 3) 00:07:50.551 23189.662 - 23290.486: 99.7056% ( 3) 00:07:50.551 23290.486 - 23391.311: 99.7283% ( 4) 00:07:50.551 23391.311 - 23492.135: 99.7452% ( 3) 00:07:50.551 23492.135 - 23592.960: 99.7679% ( 4) 00:07:50.551 23592.960 - 23693.785: 99.7905% ( 4) 00:07:50.551 23693.785 - 23794.609: 99.8132% ( 4) 00:07:50.551 23794.609 - 23895.434: 99.8302% ( 3) 00:07:50.551 23895.434 - 23996.258: 99.8415% ( 2) 00:07:50.551 23996.258 - 24097.083: 99.8641% ( 4) 00:07:50.551 24097.083 - 24197.908: 99.8811% ( 3) 00:07:50.551 24197.908 - 24298.732: 99.9038% ( 4) 00:07:50.551 24298.732 - 24399.557: 99.9264% ( 4) 00:07:50.551 24399.557 - 24500.382: 99.9490% ( 4) 00:07:50.551 24500.382 - 24601.206: 99.9717% ( 4) 00:07:50.551 24601.206 - 24702.031: 99.9887% ( 3) 00:07:50.551 24702.031 - 24802.855: 100.0000% ( 2) 00:07:50.551 00:07:50.551 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:50.551 ============================================================================== 00:07:50.551 Range in us Cumulative IO count 
00:07:50.551 [preceding latency histogram continued: buckets from 4032.985us through 25004.505us elided; cumulative IO count reaches 100.0000% ( 3) in the 24903.680 - 25004.505us bucket]
00:07:50.552 
00:07:50.552 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:50.552 ==============================================================================
00:07:50.552        Range in us     Cumulative    IO count
00:07:50.552      3856.542 -  3881.748:   0.0170% (  3)
00:07:50.552 [remaining buckets through 25206.154us elided; cumulative IO count reaches 100.0000% ( 2) in the 25105.329 - 25206.154us bucket]
00:07:50.554 
00:07:50.554 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:50.554 ==============================================================================
00:07:50.554        Range in us     Cumulative    IO count
00:07:50.554      3554.068 -  3579.274:   0.0226% (  4)
00:07:50.554 [remaining buckets through 25306.978us elided; cumulative IO count reaches 100.0000% ( 4) in the 25206.154 - 25306.978us bucket]
00:07:50.555 
00:07:50.555 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:50.555 ==============================================================================
00:07:50.555        Range in us     Cumulative    IO count
00:07:50.555      3201.182 -  3213.785:   0.0057% (  1)
00:07:50.555 [remaining buckets through 25206.154us elided; cumulative IO count reaches 100.0000% ( 3) in the 25105.329 - 25206.154us bucket]
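The histograms above pair each latency bucket with a cumulative percentage and a per-bucket I/O count. A minimal sketch of that arithmetic in Python, using the first bucket of the PCIE (0000:00:12.0) NSID 1 histogram retained above, and assuming (as the monotonically increasing column suggests) that the percentage is cumulative while the parenthesized count belongs to the bucket alone:

    # First bucket of the PCIE (0000:00:12.0) NSID 1 histogram above:
    #   3856.542 - 3881.748: 0.0170% ( 3)
    bucket_count = 3          # I/Os completing inside this latency range
    cumulative_pct = 0.0170   # running share of all I/Os, in percent

    # Since this is the first bucket, its 3 I/Os account for the whole
    # 0.0170%, which lets us back out the approximate total I/O count:
    total_ios = bucket_count / (cumulative_pct / 100.0)
    print(round(total_ios))   # ~17647 I/Os completed in this run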
00:07:50.556 21:13:39 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:51.491 Initializing NVMe Controllers
00:07:51.491 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:51.491 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:51.491 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:51.491 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:51.491 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:51.491 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:51.491 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:51.491 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:51.491 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:51.491 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:51.491 Initialization complete. Launching workers.
00:07:51.491 ========================================================
00:07:51.491                                                                     Latency(us)
00:07:51.491 Device Information                       :       IOPS      MiB/s    Average        min        max
00:07:51.491 PCIE (0000:00:10.0) NSID 1 from core 0   :   16597.87     194.51    7714.09    5181.21   27978.30
00:07:51.491 PCIE (0000:00:11.0) NSID 1 from core 0   :   16597.87     194.51    7707.19    5058.77   27243.37
00:07:51.491 PCIE (0000:00:13.0) NSID 1 from core 0   :   16597.87     194.51    7700.87    4171.13   26482.22
00:07:51.491 PCIE (0000:00:12.0) NSID 1 from core 0   :   16597.87     194.51    7694.32    4096.66   25626.13
00:07:51.491 PCIE (0000:00:12.0) NSID 2 from core 0   :   16597.87     194.51    7687.65    3683.07   25064.75
00:07:51.491 PCIE (0000:00:12.0) NSID 3 from core 0   :   16597.87     194.51    7681.08    3326.06   25095.16
00:07:51.491 ========================================================
00:07:51.491 Total                                    :   99587.25    1167.04    7697.53    3326.06   27978.30
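For reference, the spdk_nvme_perf invocation above asks for queue depth 128 (-q), a sequential write workload (-w write) of 12288-byte I/Os (-o), a 1 second run (-t), latency tracking (-L, doubled here, which is what yields the per-device latency histograms in this log), and shared memory group 0 (-i). The MiB/s column in the table is simply IOPS times I/O size; a quick Python check against the PCIE (0000:00:10.0) NSID 1 row:

    # Sanity check: MiB/s should equal IOPS x I/O size (-o 12288 bytes).
    iops = 16597.87
    io_size_bytes = 12288              # from the -o flag
    mib_per_s = iops * io_size_bytes / 2**20
    print(f"{mib_per_s:.2f} MiB/s")    # -> 194.51, matching the table row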
00:07:51.491 
00:07:51.491 Summary latency data from core 0 (all values in us):
00:07:51.491 =================================================================================
00:07:51.491 (columns: PCIE 0000:00:10.0 NSID 1, 0000:00:11.0 NSID 1, 0000:00:13.0 NSID 1, 0000:00:12.0 NSID 1, NSID 2, NSID 3)
00:07:51.491   Percentile :    10.0/ns1    11.0/ns1    13.0/ns1    12.0/ns1    12.0/ns2    12.0/ns3
00:07:51.491     1.00000% :    6503.188    6503.188    6553.600    6452.775    6503.188    6452.775
00:07:51.491    10.00000% :    6906.486    7007.311    6956.898    6956.898    7007.311    7007.311
00:07:51.491    25.00000% :    7108.135    7158.548    7158.548    7158.548    7158.548    7158.548
00:07:51.491    50.00000% :    7410.609    7410.609    7410.609    7410.609    7410.609    7410.609
00:07:51.491    75.00000% :    7864.320    7763.495    7763.495    7763.495    7763.495    7763.495
00:07:51.491    90.00000% :    8570.092    8469.268    8469.268    8519.680    8519.680    8570.092
00:07:51.491    95.00000% :    9326.277    9477.514    9326.277    9275.865    9275.865    9175.040
00:07:51.491    98.00000% :   12199.778   11846.892   12149.366   11443.594   11191.532   11494.006
00:07:51.491    99.00000% :   13611.323   13510.498   14014.622   13913.797   13913.797   13510.498
00:07:51.491    99.50000% :   17442.658   18047.606   18854.203   19257.502   19559.975   19459.151
00:07:51.491    99.90000% :   27424.295   27020.997   26214.400   25407.803   24903.680   24903.680
00:07:51.491    99.99000% :   28029.243   27424.295   26617.698   25609.452   25105.329   25105.329
00:07:51.491    99.99900% :   28029.243   27424.295   26617.698   25710.277   25105.329   25105.329
00:07:51.491    99.99990% :   28029.243   27424.295   26617.698   25710.277   25105.329   25105.329
00:07:51.491    99.99999% :   28029.243   27424.295   26617.698   25710.277   25105.329   25105.329
00:07:51.491 
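The percentile summaries are consistent with the cumulative histograms that follow: each "N% : Xus" entry is the upper bound of the first bucket whose cumulative share reaches N percent. A minimal sketch of that lookup in Python (not SPDK's own code), with toy buckets shaped like the log's "Range in us / Cumulative IO count" rows:

    def percentile(buckets, target_pct):
        """buckets: (upper_bound_us, cumulative_pct) pairs, ascending."""
        for upper_us, cum_pct in buckets:
            if cum_pct >= target_pct:
                return upper_us
        return buckets[-1][0]

    # Toy data; the 99% row reproduces the 10.0/ns1 summary entry above:
    toy = [(7410.609, 50.8), (8570.092, 90.2), (13611.323, 99.0), (28029.243, 100.0)]
    print(percentile(toy, 99.0))   # -> 13611.323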
00:07:51.491 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:51.491 ==============================================================================
00:07:51.491        Range in us     Cumulative    IO count
00:07:51.491      5167.262 -  5192.468:   0.0060% (  1)
00:07:51.492 [remaining buckets through 28029.243us elided; cumulative IO count reaches 100.0000% ( 4) in the 27827.594 - 28029.243us bucket]
00:07:51.492 
00:07:51.492 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:51.492 ==============================================================================
00:07:51.492        Range in us     Cumulative    IO count
00:07:51.492      5041.231 -  5066.437:   0.0060% (  1)
00:07:51.805 [remaining buckets through 27424.295us elided; cumulative IO count reaches 100.0000% ( 2) in the 27222.646 - 27424.295us bucket]
00:07:51.805 
00:07:51.805 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:51.806 ==============================================================================
00:07:51.806        Range in us     Cumulative    IO count
00:07:51.806      4159.015 -  4184.222:   0.0060% (  1)
00:07:51.806 [further buckets elided here; cumulative IO count shown through 47.8005% ( 953) in the 7309.785 - 7360.197us bucket; histogram continues below]
7360.197 - 7410.609: 53.1070% ( 883) 00:07:51.806 7410.609 - 7461.022: 57.6202% ( 751) 00:07:51.806 7461.022 - 7511.434: 61.9111% ( 714) 00:07:51.806 7511.434 - 7561.846: 66.0457% ( 688) 00:07:51.806 7561.846 - 7612.258: 68.8281% ( 463) 00:07:51.806 7612.258 - 7662.671: 71.4423% ( 435) 00:07:51.806 7662.671 - 7713.083: 73.7981% ( 392) 00:07:51.806 7713.083 - 7763.495: 75.9195% ( 353) 00:07:51.806 7763.495 - 7813.908: 78.3113% ( 398) 00:07:51.806 7813.908 - 7864.320: 80.1502% ( 306) 00:07:51.806 7864.320 - 7914.732: 81.7007% ( 258) 00:07:51.806 7914.732 - 7965.145: 82.7945% ( 182) 00:07:51.806 7965.145 - 8015.557: 83.7800% ( 164) 00:07:51.806 8015.557 - 8065.969: 84.6995% ( 153) 00:07:51.806 8065.969 - 8116.382: 85.6971% ( 166) 00:07:51.806 8116.382 - 8166.794: 86.7428% ( 174) 00:07:51.806 8166.794 - 8217.206: 87.3738% ( 105) 00:07:51.806 8217.206 - 8267.618: 87.9928% ( 103) 00:07:51.806 8267.618 - 8318.031: 88.6899% ( 116) 00:07:51.806 8318.031 - 8368.443: 89.3089% ( 103) 00:07:51.806 8368.443 - 8418.855: 89.8017% ( 82) 00:07:51.806 8418.855 - 8469.268: 90.1863% ( 64) 00:07:51.806 8469.268 - 8519.680: 90.5288% ( 57) 00:07:51.806 8519.680 - 8570.092: 90.7752% ( 41) 00:07:51.806 8570.092 - 8620.505: 91.0276% ( 42) 00:07:51.806 8620.505 - 8670.917: 91.4062% ( 63) 00:07:51.806 8670.917 - 8721.329: 91.7668% ( 60) 00:07:51.806 8721.329 - 8771.742: 92.1575% ( 65) 00:07:51.806 8771.742 - 8822.154: 92.6743% ( 86) 00:07:51.806 8822.154 - 8872.566: 92.8966% ( 37) 00:07:51.806 8872.566 - 8922.978: 93.1010% ( 34) 00:07:51.806 8922.978 - 8973.391: 93.4255% ( 54) 00:07:51.806 8973.391 - 9023.803: 93.7079% ( 47) 00:07:51.806 9023.803 - 9074.215: 93.9002% ( 32) 00:07:51.806 9074.215 - 9124.628: 94.1526% ( 42) 00:07:51.806 9124.628 - 9175.040: 94.3510% ( 33) 00:07:51.806 9175.040 - 9225.452: 94.5553% ( 34) 00:07:51.806 9225.452 - 9275.865: 94.7837% ( 38) 00:07:51.806 9275.865 - 9326.277: 95.0541% ( 45) 00:07:51.806 9326.277 - 9376.689: 95.2163% ( 27) 00:07:51.806 9376.689 - 9427.102: 95.4387% ( 37) 00:07:51.806 9427.102 - 9477.514: 95.6550% ( 36) 00:07:51.806 9477.514 - 9527.926: 95.8534% ( 33) 00:07:51.806 9527.926 - 9578.338: 96.1118% ( 43) 00:07:51.806 9578.338 - 9628.751: 96.3041% ( 32) 00:07:51.806 9628.751 - 9679.163: 96.4543% ( 25) 00:07:51.806 9679.163 - 9729.575: 96.5745% ( 20) 00:07:51.806 9729.575 - 9779.988: 96.6827% ( 18) 00:07:51.806 9779.988 - 9830.400: 96.7368% ( 9) 00:07:51.806 9830.400 - 9880.812: 96.7788% ( 7) 00:07:51.806 9880.812 - 9931.225: 96.8149% ( 6) 00:07:51.806 9931.225 - 9981.637: 96.8570% ( 7) 00:07:51.806 9981.637 - 10032.049: 96.8930% ( 6) 00:07:51.806 10032.049 - 10082.462: 96.9411% ( 8) 00:07:51.806 10082.462 - 10132.874: 96.9772% ( 6) 00:07:51.806 10132.874 - 10183.286: 97.0192% ( 7) 00:07:51.806 10183.286 - 10233.698: 97.0493% ( 5) 00:07:51.806 10233.698 - 10284.111: 97.0974% ( 8) 00:07:51.806 10284.111 - 10334.523: 97.2296% ( 22) 00:07:51.806 10334.523 - 10384.935: 97.2897% ( 10) 00:07:51.806 10384.935 - 10435.348: 97.3377% ( 8) 00:07:51.806 10435.348 - 10485.760: 97.5361% ( 33) 00:07:51.806 10485.760 - 10536.172: 97.5721% ( 6) 00:07:51.806 10536.172 - 10586.585: 97.6022% ( 5) 00:07:51.806 10586.585 - 10636.997: 97.6502% ( 8) 00:07:51.806 10636.997 - 10687.409: 97.6863% ( 6) 00:07:51.806 10687.409 - 10737.822: 97.6923% ( 1) 00:07:51.806 11443.594 - 11494.006: 97.7163% ( 4) 00:07:51.806 11494.006 - 11544.418: 97.7344% ( 3) 00:07:51.806 11544.418 - 11594.831: 97.7584% ( 4) 00:07:51.806 11594.831 - 11645.243: 97.7825% ( 4) 00:07:51.806 11645.243 - 11695.655: 97.8005% ( 3) 
00:07:51.806 11695.655 - 11746.068: 97.8425% ( 7) 00:07:51.806 11746.068 - 11796.480: 97.8966% ( 9) 00:07:51.806 11796.480 - 11846.892: 97.9327% ( 6) 00:07:51.806 11846.892 - 11897.305: 97.9447% ( 2) 00:07:51.806 11897.305 - 11947.717: 97.9567% ( 2) 00:07:51.806 11947.717 - 11998.129: 97.9688% ( 2) 00:07:51.806 11998.129 - 12048.542: 97.9748% ( 1) 00:07:51.807 12048.542 - 12098.954: 97.9868% ( 2) 00:07:51.807 12098.954 - 12149.366: 98.0409% ( 9) 00:07:51.807 12149.366 - 12199.778: 98.1430% ( 17) 00:07:51.807 12199.778 - 12250.191: 98.2692% ( 21) 00:07:51.807 12250.191 - 12300.603: 98.3594% ( 15) 00:07:51.807 12300.603 - 12351.015: 98.3954% ( 6) 00:07:51.807 12351.015 - 12401.428: 98.4495% ( 9) 00:07:51.807 12401.428 - 12451.840: 98.4976% ( 8) 00:07:51.807 12451.840 - 12502.252: 98.5397% ( 7) 00:07:51.807 12502.252 - 12552.665: 98.5817% ( 7) 00:07:51.807 12552.665 - 12603.077: 98.6298% ( 8) 00:07:51.807 12603.077 - 12653.489: 98.6719% ( 7) 00:07:51.807 12653.489 - 12703.902: 98.7019% ( 5) 00:07:51.807 12703.902 - 12754.314: 98.7260% ( 4) 00:07:51.807 12754.314 - 12804.726: 98.7500% ( 4) 00:07:51.807 12804.726 - 12855.138: 98.7680% ( 3) 00:07:51.807 12855.138 - 12905.551: 98.7921% ( 4) 00:07:51.807 12905.551 - 13006.375: 98.8341% ( 7) 00:07:51.807 13006.375 - 13107.200: 98.8462% ( 2) 00:07:51.807 13409.674 - 13510.498: 98.8642% ( 3) 00:07:51.807 13510.498 - 13611.323: 98.8882% ( 4) 00:07:51.807 13611.323 - 13712.148: 98.9002% ( 2) 00:07:51.807 13712.148 - 13812.972: 98.9303% ( 5) 00:07:51.807 13812.972 - 13913.797: 98.9724% ( 7) 00:07:51.807 13913.797 - 14014.622: 99.1947% ( 37) 00:07:51.807 14014.622 - 14115.446: 99.2188% ( 4) 00:07:51.807 14115.446 - 14216.271: 99.2308% ( 2) 00:07:51.807 17341.834 - 17442.658: 99.2368% ( 1) 00:07:51.807 17543.483 - 17644.308: 99.2428% ( 1) 00:07:51.807 17644.308 - 17745.132: 99.2608% ( 3) 00:07:51.807 17745.132 - 17845.957: 99.2788% ( 3) 00:07:51.807 17845.957 - 17946.782: 99.3029% ( 4) 00:07:51.807 17946.782 - 18047.606: 99.3269% ( 4) 00:07:51.807 18047.606 - 18148.431: 99.3510% ( 4) 00:07:51.807 18148.431 - 18249.255: 99.3750% ( 4) 00:07:51.807 18249.255 - 18350.080: 99.3930% ( 3) 00:07:51.807 18350.080 - 18450.905: 99.4111% ( 3) 00:07:51.807 18450.905 - 18551.729: 99.4351% ( 4) 00:07:51.807 18551.729 - 18652.554: 99.4591% ( 4) 00:07:51.807 18652.554 - 18753.378: 99.4832% ( 4) 00:07:51.807 18753.378 - 18854.203: 99.5012% ( 3) 00:07:51.807 18854.203 - 18955.028: 99.5252% ( 4) 00:07:51.807 18955.028 - 19055.852: 99.5433% ( 3) 00:07:51.807 19055.852 - 19156.677: 99.5673% ( 4) 00:07:51.807 19156.677 - 19257.502: 99.5793% ( 2) 00:07:51.807 19257.502 - 19358.326: 99.6034% ( 4) 00:07:51.807 19358.326 - 19459.151: 99.6154% ( 2) 00:07:51.807 25407.803 - 25508.628: 99.6334% ( 3) 00:07:51.807 25508.628 - 25609.452: 99.6695% ( 6) 00:07:51.807 25609.452 - 25710.277: 99.6995% ( 5) 00:07:51.807 25710.277 - 25811.102: 99.7356% ( 6) 00:07:51.807 25811.102 - 26012.751: 99.8738% ( 23) 00:07:51.807 26012.751 - 26214.400: 99.9519% ( 13) 00:07:51.807 26214.400 - 26416.049: 99.9760% ( 4) 00:07:51.807 26416.049 - 26617.698: 100.0000% ( 4) 00:07:51.807 00:07:51.807 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:51.807 ============================================================================== 00:07:51.807 Range in us Cumulative IO count 00:07:51.807 4083.397 - 4108.603: 0.0120% ( 2) 00:07:51.807 4108.603 - 4133.809: 0.0421% ( 5) 00:07:51.807 4133.809 - 4159.015: 0.0601% ( 3) 00:07:51.807 4159.015 - 4184.222: 0.1202% ( 10) 00:07:51.807 4184.222 - 4209.428: 
0.1803% ( 10) 00:07:51.807 4209.428 - 4234.634: 0.2284% ( 8) 00:07:51.807 4234.634 - 4259.840: 0.2404% ( 2) 00:07:51.807 4259.840 - 4285.046: 0.2524% ( 2) 00:07:51.807 4285.046 - 4310.252: 0.2704% ( 3) 00:07:51.807 4310.252 - 4335.458: 0.2825% ( 2) 00:07:51.807 4335.458 - 4360.665: 0.2945% ( 2) 00:07:51.807 4360.665 - 4385.871: 0.3125% ( 3) 00:07:51.807 4385.871 - 4411.077: 0.3245% ( 2) 00:07:51.807 4411.077 - 4436.283: 0.3365% ( 2) 00:07:51.807 4436.283 - 4461.489: 0.3486% ( 2) 00:07:51.807 4461.489 - 4486.695: 0.3666% ( 3) 00:07:51.807 4486.695 - 4511.902: 0.3786% ( 2) 00:07:51.807 4511.902 - 4537.108: 0.3846% ( 1) 00:07:51.807 6200.714 - 6225.920: 0.3966% ( 2) 00:07:51.807 6225.920 - 6251.126: 0.4147% ( 3) 00:07:51.807 6251.126 - 6276.332: 0.4327% ( 3) 00:07:51.807 6276.332 - 6301.538: 0.4688% ( 6) 00:07:51.807 6301.538 - 6326.745: 0.5108% ( 7) 00:07:51.807 6326.745 - 6351.951: 0.5589% ( 8) 00:07:51.807 6351.951 - 6377.157: 0.7212% ( 27) 00:07:51.807 6377.157 - 6402.363: 0.7933% ( 12) 00:07:51.807 6402.363 - 6427.569: 0.9315% ( 23) 00:07:51.807 6427.569 - 6452.775: 1.0036% ( 12) 00:07:51.807 6452.775 - 6503.188: 1.1418% ( 23) 00:07:51.807 6503.188 - 6553.600: 1.4423% ( 50) 00:07:51.807 6553.600 - 6604.012: 1.6286% ( 31) 00:07:51.807 6604.012 - 6654.425: 1.8450% ( 36) 00:07:51.807 6654.425 - 6704.837: 2.2175% ( 62) 00:07:51.807 6704.837 - 6755.249: 2.7825% ( 94) 00:07:51.807 6755.249 - 6805.662: 4.0986% ( 219) 00:07:51.807 6805.662 - 6856.074: 5.8233% ( 287) 00:07:51.807 6856.074 - 6906.486: 7.7464% ( 320) 00:07:51.807 6906.486 - 6956.898: 10.1923% ( 407) 00:07:51.807 6956.898 - 7007.311: 13.7200% ( 587) 00:07:51.807 7007.311 - 7057.723: 17.2837% ( 593) 00:07:51.807 7057.723 - 7108.135: 21.8029% ( 752) 00:07:51.807 7108.135 - 7158.548: 25.8834% ( 679) 00:07:51.807 7158.548 - 7208.960: 30.4748% ( 764) 00:07:51.807 7208.960 - 7259.372: 35.9435% ( 910) 00:07:51.807 7259.372 - 7309.785: 41.7728% ( 970) 00:07:51.807 7309.785 - 7360.197: 47.2897% ( 918) 00:07:51.807 7360.197 - 7410.609: 52.4159% ( 853) 00:07:51.807 7410.609 - 7461.022: 57.3558% ( 822) 00:07:51.807 7461.022 - 7511.434: 62.1454% ( 797) 00:07:51.807 7511.434 - 7561.846: 65.9736% ( 637) 00:07:51.807 7561.846 - 7612.258: 69.5312% ( 592) 00:07:51.807 7612.258 - 7662.671: 71.8209% ( 381) 00:07:51.807 7662.671 - 7713.083: 74.2368% ( 402) 00:07:51.807 7713.083 - 7763.495: 76.6286% ( 398) 00:07:51.807 7763.495 - 7813.908: 79.1106% ( 413) 00:07:51.807 7813.908 - 7864.320: 80.9856% ( 312) 00:07:51.807 7864.320 - 7914.732: 82.3438% ( 226) 00:07:51.807 7914.732 - 7965.145: 83.7861% ( 240) 00:07:51.807 7965.145 - 8015.557: 84.8738% ( 181) 00:07:51.807 8015.557 - 8065.969: 85.5108% ( 106) 00:07:51.807 8065.969 - 8116.382: 86.0938% ( 97) 00:07:51.807 8116.382 - 8166.794: 86.8149% ( 120) 00:07:51.807 8166.794 - 8217.206: 87.2716% ( 76) 00:07:51.807 8217.206 - 8267.618: 87.9387% ( 111) 00:07:51.807 8267.618 - 8318.031: 88.3774% ( 73) 00:07:51.807 8318.031 - 8368.443: 88.8161% ( 73) 00:07:51.807 8368.443 - 8418.855: 89.3810% ( 94) 00:07:51.807 8418.855 - 8469.268: 89.8558% ( 79) 00:07:51.807 8469.268 - 8519.680: 90.3005% ( 74) 00:07:51.807 8519.680 - 8570.092: 90.5709% ( 45) 00:07:51.807 8570.092 - 8620.505: 90.9796% ( 68) 00:07:51.807 8620.505 - 8670.917: 91.3822% ( 67) 00:07:51.807 8670.917 - 8721.329: 91.7788% ( 66) 00:07:51.807 8721.329 - 8771.742: 92.1214% ( 57) 00:07:51.807 8771.742 - 8822.154: 92.6502% ( 88) 00:07:51.807 8822.154 - 8872.566: 92.9026% ( 42) 00:07:51.807 8872.566 - 8922.978: 93.2212% ( 53) 00:07:51.807 8922.978 - 8973.391: 
93.3954% ( 29) 00:07:51.807 8973.391 - 9023.803: 93.5817% ( 31) 00:07:51.807 9023.803 - 9074.215: 93.8642% ( 47) 00:07:51.807 9074.215 - 9124.628: 94.2127% ( 58) 00:07:51.807 9124.628 - 9175.040: 94.5974% ( 64) 00:07:51.807 9175.040 - 9225.452: 94.8498% ( 42) 00:07:51.807 9225.452 - 9275.865: 95.0361% ( 31) 00:07:51.807 9275.865 - 9326.277: 95.2043% ( 28) 00:07:51.807 9326.277 - 9376.689: 95.4026% ( 33) 00:07:51.807 9376.689 - 9427.102: 95.5228% ( 20) 00:07:51.807 9427.102 - 9477.514: 95.6791% ( 26) 00:07:51.807 9477.514 - 9527.926: 95.8774% ( 33) 00:07:51.807 9527.926 - 9578.338: 96.0096% ( 22) 00:07:51.807 9578.338 - 9628.751: 96.0998% ( 15) 00:07:51.807 9628.751 - 9679.163: 96.1959% ( 16) 00:07:51.807 9679.163 - 9729.575: 96.2921% ( 16) 00:07:51.807 9729.575 - 9779.988: 96.4483% ( 26) 00:07:51.807 9779.988 - 9830.400: 96.4964% ( 8) 00:07:51.807 9830.400 - 9880.812: 96.5385% ( 7) 00:07:51.807 9880.812 - 9931.225: 96.5925% ( 9) 00:07:51.807 9931.225 - 9981.637: 96.6647% ( 12) 00:07:51.807 9981.637 - 10032.049: 96.7188% ( 9) 00:07:51.807 10032.049 - 10082.462: 96.8029% ( 14) 00:07:51.807 10082.462 - 10132.874: 96.8690% ( 11) 00:07:51.807 10132.874 - 10183.286: 96.9111% ( 7) 00:07:51.807 10183.286 - 10233.698: 96.9471% ( 6) 00:07:51.807 10233.698 - 10284.111: 97.0493% ( 17) 00:07:51.807 10284.111 - 10334.523: 97.1815% ( 22) 00:07:51.807 10334.523 - 10384.935: 97.2115% ( 5) 00:07:51.807 10384.935 - 10435.348: 97.2416% ( 5) 00:07:51.807 10435.348 - 10485.760: 97.2596% ( 3) 00:07:51.807 10485.760 - 10536.172: 97.2837% ( 4) 00:07:51.807 10536.172 - 10586.585: 97.3017% ( 3) 00:07:51.807 10636.997 - 10687.409: 97.3317% ( 5) 00:07:51.807 10687.409 - 10737.822: 97.3438% ( 2) 00:07:51.807 10737.822 - 10788.234: 97.4038% ( 10) 00:07:51.807 10788.234 - 10838.646: 97.4399% ( 6) 00:07:51.807 10838.646 - 10889.058: 97.5120% ( 12) 00:07:51.807 10889.058 - 10939.471: 97.5721% ( 10) 00:07:51.807 10939.471 - 10989.883: 97.7404% ( 28) 00:07:51.807 10989.883 - 11040.295: 97.7945% ( 9) 00:07:51.807 11040.295 - 11090.708: 97.8786% ( 14) 00:07:51.807 11090.708 - 11141.120: 97.9087% ( 5) 00:07:51.807 11141.120 - 11191.532: 97.9327% ( 4) 00:07:51.807 11191.532 - 11241.945: 97.9627% ( 5) 00:07:51.807 11241.945 - 11292.357: 97.9748% ( 2) 00:07:51.807 11292.357 - 11342.769: 97.9808% ( 1) 00:07:51.807 11342.769 - 11393.182: 97.9928% ( 2) 00:07:51.807 11393.182 - 11443.594: 98.0228% ( 5) 00:07:51.807 11443.594 - 11494.006: 98.0589% ( 6) 00:07:51.807 11494.006 - 11544.418: 98.0889% ( 5) 00:07:51.807 11544.418 - 11594.831: 98.1310% ( 7) 00:07:51.807 11594.831 - 11645.243: 98.1971% ( 11) 00:07:51.807 11645.243 - 11695.655: 98.2572% ( 10) 00:07:51.807 11695.655 - 11746.068: 98.2752% ( 3) 00:07:51.808 11746.068 - 11796.480: 98.2933% ( 3) 00:07:51.808 11796.480 - 11846.892: 98.3053% ( 2) 00:07:51.808 11846.892 - 11897.305: 98.3173% ( 2) 00:07:51.808 11897.305 - 11947.717: 98.3293% ( 2) 00:07:51.808 11947.717 - 11998.129: 98.3353% ( 1) 00:07:51.808 11998.129 - 12048.542: 98.3534% ( 3) 00:07:51.808 12048.542 - 12098.954: 98.3894% ( 6) 00:07:51.808 12098.954 - 12149.366: 98.4195% ( 5) 00:07:51.808 12149.366 - 12199.778: 98.4555% ( 6) 00:07:51.808 12199.778 - 12250.191: 98.4796% ( 4) 00:07:51.808 12250.191 - 12300.603: 98.5156% ( 6) 00:07:51.808 12300.603 - 12351.015: 98.5457% ( 5) 00:07:51.808 12351.015 - 12401.428: 98.5817% ( 6) 00:07:51.808 12401.428 - 12451.840: 98.6058% ( 4) 00:07:51.808 12451.840 - 12502.252: 98.6418% ( 6) 00:07:51.808 12502.252 - 12552.665: 98.6719% ( 5) 00:07:51.808 12552.665 - 12603.077: 98.7019% ( 5) 
00:07:51.808 12603.077 - 12653.489: 98.7200% ( 3) 00:07:51.808 12653.489 - 12703.902: 98.7440% ( 4) 00:07:51.808 12703.902 - 12754.314: 98.7620% ( 3) 00:07:51.808 12754.314 - 12804.726: 98.7861% ( 4) 00:07:51.808 12804.726 - 12855.138: 98.8041% ( 3) 00:07:51.808 12855.138 - 12905.551: 98.8281% ( 4) 00:07:51.808 12905.551 - 13006.375: 98.8462% ( 3) 00:07:51.808 13712.148 - 13812.972: 98.9183% ( 12) 00:07:51.808 13812.972 - 13913.797: 99.1166% ( 33) 00:07:51.808 13913.797 - 14014.622: 99.1827% ( 11) 00:07:51.808 14014.622 - 14115.446: 99.2248% ( 7) 00:07:51.808 14115.446 - 14216.271: 99.2308% ( 1) 00:07:51.808 17442.658 - 17543.483: 99.2368% ( 1) 00:07:51.808 17946.782 - 18047.606: 99.2488% ( 2) 00:07:51.808 18047.606 - 18148.431: 99.2668% ( 3) 00:07:51.808 18148.431 - 18249.255: 99.2849% ( 3) 00:07:51.808 18249.255 - 18350.080: 99.3089% ( 4) 00:07:51.808 18350.080 - 18450.905: 99.3329% ( 4) 00:07:51.808 18450.905 - 18551.729: 99.3570% ( 4) 00:07:51.808 18551.729 - 18652.554: 99.3810% ( 4) 00:07:51.808 18652.554 - 18753.378: 99.4050% ( 4) 00:07:51.808 18753.378 - 18854.203: 99.4291% ( 4) 00:07:51.808 18854.203 - 18955.028: 99.4531% ( 4) 00:07:51.808 18955.028 - 19055.852: 99.4712% ( 3) 00:07:51.808 19055.852 - 19156.677: 99.4952% ( 4) 00:07:51.808 19156.677 - 19257.502: 99.5192% ( 4) 00:07:51.808 19257.502 - 19358.326: 99.5373% ( 3) 00:07:51.808 19358.326 - 19459.151: 99.5553% ( 3) 00:07:51.808 19459.151 - 19559.975: 99.5673% ( 2) 00:07:51.808 19559.975 - 19660.800: 99.5913% ( 4) 00:07:51.808 19660.800 - 19761.625: 99.6034% ( 2) 00:07:51.808 19761.625 - 19862.449: 99.6154% ( 2) 00:07:51.808 25004.505 - 25105.329: 99.6394% ( 4) 00:07:51.808 25105.329 - 25206.154: 99.6755% ( 6) 00:07:51.808 25206.154 - 25306.978: 99.7115% ( 6) 00:07:51.808 25306.978 - 25407.803: 99.9038% ( 32) 00:07:51.808 25407.803 - 25508.628: 99.9519% ( 8) 00:07:51.808 25508.628 - 25609.452: 99.9940% ( 7) 00:07:51.808 25609.452 - 25710.277: 100.0000% ( 1) 00:07:51.808 00:07:51.808 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:51.808 ============================================================================== 00:07:51.808 Range in us Cumulative IO count 00:07:51.808 3680.098 - 3705.305: 0.0060% ( 1) 00:07:51.808 3730.511 - 3755.717: 0.0180% ( 2) 00:07:51.808 3755.717 - 3780.923: 0.0481% ( 5) 00:07:51.808 3780.923 - 3806.129: 0.0901% ( 7) 00:07:51.808 3806.129 - 3831.335: 0.1262% ( 6) 00:07:51.808 3831.335 - 3856.542: 0.1863% ( 10) 00:07:51.808 3856.542 - 3881.748: 0.2344% ( 8) 00:07:51.808 3881.748 - 3906.954: 0.2524% ( 3) 00:07:51.808 3906.954 - 3932.160: 0.2644% ( 2) 00:07:51.808 3932.160 - 3957.366: 0.2825% ( 3) 00:07:51.808 3957.366 - 3982.572: 0.2945% ( 2) 00:07:51.808 3982.572 - 4007.778: 0.3065% ( 2) 00:07:51.808 4007.778 - 4032.985: 0.3245% ( 3) 00:07:51.808 4032.985 - 4058.191: 0.3365% ( 2) 00:07:51.808 4058.191 - 4083.397: 0.3486% ( 2) 00:07:51.808 4083.397 - 4108.603: 0.3606% ( 2) 00:07:51.808 4108.603 - 4133.809: 0.3786% ( 3) 00:07:51.808 4133.809 - 4159.015: 0.3846% ( 1) 00:07:51.808 5923.446 - 5948.652: 0.3906% ( 1) 00:07:51.808 6074.683 - 6099.889: 0.3966% ( 1) 00:07:51.808 6099.889 - 6125.095: 0.4207% ( 4) 00:07:51.808 6125.095 - 6150.302: 0.4507% ( 5) 00:07:51.808 6150.302 - 6175.508: 0.4928% ( 7) 00:07:51.808 6175.508 - 6200.714: 0.5409% ( 8) 00:07:51.808 6200.714 - 6225.920: 0.6130% ( 12) 00:07:51.808 6225.920 - 6251.126: 0.6370% ( 4) 00:07:51.808 6251.126 - 6276.332: 0.6611% ( 4) 00:07:51.808 6276.332 - 6301.538: 0.6911% ( 5) 00:07:51.808 6301.538 - 6326.745: 0.7212% ( 5) 
00:07:51.808 6326.745 - 6351.951: 0.7572% ( 6) 00:07:51.808 6351.951 - 6377.157: 0.8053% ( 8) 00:07:51.808 6377.157 - 6402.363: 0.8474% ( 7) 00:07:51.808 6402.363 - 6427.569: 0.8894% ( 7) 00:07:51.808 6427.569 - 6452.775: 0.9495% ( 10) 00:07:51.808 6452.775 - 6503.188: 1.0877% ( 23) 00:07:51.808 6503.188 - 6553.600: 1.2200% ( 22) 00:07:51.808 6553.600 - 6604.012: 1.3882% ( 28) 00:07:51.808 6604.012 - 6654.425: 1.8209% ( 72) 00:07:51.808 6654.425 - 6704.837: 2.2296% ( 68) 00:07:51.808 6704.837 - 6755.249: 2.8185% ( 98) 00:07:51.808 6755.249 - 6805.662: 3.8762% ( 176) 00:07:51.808 6805.662 - 6856.074: 5.1442% ( 211) 00:07:51.808 6856.074 - 6906.486: 6.8149% ( 278) 00:07:51.808 6906.486 - 6956.898: 9.6514% ( 472) 00:07:51.808 6956.898 - 7007.311: 13.1911% ( 589) 00:07:51.808 7007.311 - 7057.723: 17.3077% ( 685) 00:07:51.808 7057.723 - 7108.135: 21.5745% ( 710) 00:07:51.808 7108.135 - 7158.548: 26.6947% ( 852) 00:07:51.808 7158.548 - 7208.960: 32.0433% ( 890) 00:07:51.808 7208.960 - 7259.372: 37.3257% ( 879) 00:07:51.808 7259.372 - 7309.785: 42.3498% ( 836) 00:07:51.808 7309.785 - 7360.197: 47.3918% ( 839) 00:07:51.808 7360.197 - 7410.609: 52.7043% ( 884) 00:07:51.808 7410.609 - 7461.022: 57.3377% ( 771) 00:07:51.808 7461.022 - 7511.434: 61.5805% ( 706) 00:07:51.808 7511.434 - 7561.846: 65.8834% ( 716) 00:07:51.808 7561.846 - 7612.258: 69.2668% ( 563) 00:07:51.808 7612.258 - 7662.671: 72.1695% ( 483) 00:07:51.808 7662.671 - 7713.083: 74.8978% ( 454) 00:07:51.808 7713.083 - 7763.495: 76.9231% ( 337) 00:07:51.808 7763.495 - 7813.908: 78.6118% ( 281) 00:07:51.808 7813.908 - 7864.320: 79.8798% ( 211) 00:07:51.808 7864.320 - 7914.732: 81.0457% ( 194) 00:07:51.808 7914.732 - 7965.145: 82.2175% ( 195) 00:07:51.808 7965.145 - 8015.557: 83.2091% ( 165) 00:07:51.808 8015.557 - 8065.969: 84.1647% ( 159) 00:07:51.808 8065.969 - 8116.382: 85.1142% ( 158) 00:07:51.808 8116.382 - 8166.794: 85.9315% ( 136) 00:07:51.808 8166.794 - 8217.206: 86.8630% ( 155) 00:07:51.808 8217.206 - 8267.618: 87.6442% ( 130) 00:07:51.808 8267.618 - 8318.031: 88.4075% ( 127) 00:07:51.808 8318.031 - 8368.443: 88.9844% ( 96) 00:07:51.808 8368.443 - 8418.855: 89.3690% ( 64) 00:07:51.808 8418.855 - 8469.268: 89.8257% ( 76) 00:07:51.808 8469.268 - 8519.680: 90.2764% ( 75) 00:07:51.808 8519.680 - 8570.092: 90.6430% ( 61) 00:07:51.808 8570.092 - 8620.505: 91.1118% ( 78) 00:07:51.808 8620.505 - 8670.917: 91.5204% ( 68) 00:07:51.808 8670.917 - 8721.329: 91.7488% ( 38) 00:07:51.808 8721.329 - 8771.742: 92.0974% ( 58) 00:07:51.808 8771.742 - 8822.154: 92.5661% ( 78) 00:07:51.808 8822.154 - 8872.566: 93.0649% ( 83) 00:07:51.808 8872.566 - 8922.978: 93.6719% ( 101) 00:07:51.808 8922.978 - 8973.391: 94.0925% ( 70) 00:07:51.808 8973.391 - 9023.803: 94.3089% ( 36) 00:07:51.808 9023.803 - 9074.215: 94.4712% ( 27) 00:07:51.808 9074.215 - 9124.628: 94.6214% ( 25) 00:07:51.808 9124.628 - 9175.040: 94.8257% ( 34) 00:07:51.808 9175.040 - 9225.452: 94.9760% ( 25) 00:07:51.808 9225.452 - 9275.865: 95.1442% ( 28) 00:07:51.808 9275.865 - 9326.277: 95.4087% ( 44) 00:07:51.808 9326.277 - 9376.689: 95.6190% ( 35) 00:07:51.808 9376.689 - 9427.102: 95.7632% ( 24) 00:07:51.808 9427.102 - 9477.514: 95.8774% ( 19) 00:07:51.808 9477.514 - 9527.926: 95.9495% ( 12) 00:07:51.808 9527.926 - 9578.338: 96.0938% ( 24) 00:07:51.808 9578.338 - 9628.751: 96.1238% ( 5) 00:07:51.808 9628.751 - 9679.163: 96.2079% ( 14) 00:07:51.808 9679.163 - 9729.575: 96.3221% ( 19) 00:07:51.808 9729.575 - 9779.988: 96.4603% ( 23) 00:07:51.808 9779.988 - 9830.400: 96.6106% ( 25) 00:07:51.808 
9830.400 - 9880.812: 96.7248% ( 19) 00:07:51.808 9880.812 - 9931.225: 96.7849% ( 10) 00:07:51.808 9931.225 - 9981.637: 96.8570% ( 12) 00:07:51.808 9981.637 - 10032.049: 96.9171% ( 10) 00:07:51.808 10032.049 - 10082.462: 96.9591% ( 7) 00:07:51.808 10082.462 - 10132.874: 97.0553% ( 16) 00:07:51.808 10132.874 - 10183.286: 97.0974% ( 7) 00:07:51.808 10183.286 - 10233.698: 97.1334% ( 6) 00:07:51.808 10233.698 - 10284.111: 97.1695% ( 6) 00:07:51.808 10284.111 - 10334.523: 97.2115% ( 7) 00:07:51.808 10334.523 - 10384.935: 97.2356% ( 4) 00:07:51.808 10384.935 - 10435.348: 97.2716% ( 6) 00:07:51.808 10435.348 - 10485.760: 97.3137% ( 7) 00:07:51.808 10485.760 - 10536.172: 97.3678% ( 9) 00:07:51.808 10536.172 - 10586.585: 97.4279% ( 10) 00:07:51.808 10586.585 - 10636.997: 97.5000% ( 12) 00:07:51.808 10636.997 - 10687.409: 97.5661% ( 11) 00:07:51.808 10687.409 - 10737.822: 97.6142% ( 8) 00:07:51.808 10737.822 - 10788.234: 97.6502% ( 6) 00:07:51.808 10788.234 - 10838.646: 97.6983% ( 8) 00:07:51.808 10838.646 - 10889.058: 97.7344% ( 6) 00:07:51.808 10889.058 - 10939.471: 97.8065% ( 12) 00:07:51.808 10939.471 - 10989.883: 97.8606% ( 9) 00:07:51.808 10989.883 - 11040.295: 97.9026% ( 7) 00:07:51.808 11040.295 - 11090.708: 97.9207% ( 3) 00:07:51.808 11090.708 - 11141.120: 97.9808% ( 10) 00:07:51.808 11141.120 - 11191.532: 98.0349% ( 9) 00:07:51.808 11191.532 - 11241.945: 98.0889% ( 9) 00:07:51.808 11241.945 - 11292.357: 98.1550% ( 11) 00:07:51.808 11292.357 - 11342.769: 98.3053% ( 25) 00:07:51.808 11342.769 - 11393.182: 98.3233% ( 3) 00:07:51.808 11393.182 - 11443.594: 98.3594% ( 6) 00:07:51.808 11443.594 - 11494.006: 98.3894% ( 5) 00:07:51.808 11494.006 - 11544.418: 98.4195% ( 5) 00:07:51.808 11544.418 - 11594.831: 98.4375% ( 3) 00:07:51.808 11594.831 - 11645.243: 98.4495% ( 2) 00:07:51.808 11645.243 - 11695.655: 98.4736% ( 4) 00:07:51.809 11695.655 - 11746.068: 98.4976% ( 4) 00:07:51.809 11746.068 - 11796.480: 98.5156% ( 3) 00:07:51.809 11796.480 - 11846.892: 98.5397% ( 4) 00:07:51.809 11846.892 - 11897.305: 98.5577% ( 3) 00:07:51.809 11897.305 - 11947.717: 98.5817% ( 4) 00:07:51.809 11947.717 - 11998.129: 98.5998% ( 3) 00:07:51.809 11998.129 - 12048.542: 98.6238% ( 4) 00:07:51.809 12048.542 - 12098.954: 98.6418% ( 3) 00:07:51.809 12098.954 - 12149.366: 98.6659% ( 4) 00:07:51.809 12149.366 - 12199.778: 98.6839% ( 3) 00:07:51.809 12199.778 - 12250.191: 98.7079% ( 4) 00:07:51.809 12250.191 - 12300.603: 98.7320% ( 4) 00:07:51.809 12300.603 - 12351.015: 98.7500% ( 3) 00:07:51.809 12351.015 - 12401.428: 98.7740% ( 4) 00:07:51.809 12401.428 - 12451.840: 98.7921% ( 3) 00:07:51.809 12451.840 - 12502.252: 98.8161% ( 4) 00:07:51.809 12502.252 - 12552.665: 98.8341% ( 3) 00:07:51.809 12552.665 - 12603.077: 98.8462% ( 2) 00:07:51.809 13611.323 - 13712.148: 98.8702% ( 4) 00:07:51.809 13712.148 - 13812.972: 98.9303% ( 10) 00:07:51.809 13812.972 - 13913.797: 99.1226% ( 32) 00:07:51.809 13913.797 - 14014.622: 99.1647% ( 7) 00:07:51.809 14014.622 - 14115.446: 99.2067% ( 7) 00:07:51.809 14115.446 - 14216.271: 99.2308% ( 4) 00:07:51.809 18249.255 - 18350.080: 99.2428% ( 2) 00:07:51.809 18350.080 - 18450.905: 99.2608% ( 3) 00:07:51.809 18450.905 - 18551.729: 99.2849% ( 4) 00:07:51.809 18551.729 - 18652.554: 99.3029% ( 3) 00:07:51.809 18652.554 - 18753.378: 99.3269% ( 4) 00:07:51.809 18753.378 - 18854.203: 99.3510% ( 4) 00:07:51.809 18854.203 - 18955.028: 99.3690% ( 3) 00:07:51.809 18955.028 - 19055.852: 99.3990% ( 5) 00:07:51.809 19055.852 - 19156.677: 99.4171% ( 3) 00:07:51.809 19156.677 - 19257.502: 99.4411% ( 4) 
00:07:51.809 19257.502 - 19358.326: 99.4591% ( 3) 00:07:51.809 19358.326 - 19459.151: 99.4832% ( 4) 00:07:51.809 19459.151 - 19559.975: 99.5312% ( 8) 00:07:51.809 19559.975 - 19660.800: 99.5493% ( 3) 00:07:51.809 19660.800 - 19761.625: 99.5673% ( 3) 00:07:51.809 19761.625 - 19862.449: 99.5853% ( 3) 00:07:51.809 19862.449 - 19963.274: 99.6034% ( 3) 00:07:51.809 19963.274 - 20064.098: 99.6154% ( 2) 00:07:51.809 24399.557 - 24500.382: 99.6394% ( 4) 00:07:51.809 24500.382 - 24601.206: 99.6815% ( 7) 00:07:51.809 24601.206 - 24702.031: 99.7236% ( 7) 00:07:51.809 24702.031 - 24802.855: 99.8257% ( 17) 00:07:51.809 24802.855 - 24903.680: 99.9339% ( 18) 00:07:51.809 24903.680 - 25004.505: 99.9760% ( 7) 00:07:51.809 25004.505 - 25105.329: 100.0000% ( 4) 00:07:51.809 00:07:51.809 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:51.809 ============================================================================== 00:07:51.809 Range in us Cumulative IO count 00:07:51.809 3302.006 - 3327.212: 0.0060% ( 1) 00:07:51.809 3352.418 - 3377.625: 0.0180% ( 2) 00:07:51.809 3377.625 - 3402.831: 0.0421% ( 4) 00:07:51.809 3402.831 - 3428.037: 0.0721% ( 5) 00:07:51.809 3428.037 - 3453.243: 0.1502% ( 13) 00:07:51.809 3453.243 - 3478.449: 0.2103% ( 10) 00:07:51.809 3478.449 - 3503.655: 0.2344% ( 4) 00:07:51.809 3503.655 - 3528.862: 0.2524% ( 3) 00:07:51.809 3528.862 - 3554.068: 0.2644% ( 2) 00:07:51.809 3554.068 - 3579.274: 0.2764% ( 2) 00:07:51.809 3579.274 - 3604.480: 0.2885% ( 2) 00:07:51.809 3604.480 - 3629.686: 0.3005% ( 2) 00:07:51.809 3629.686 - 3654.892: 0.3125% ( 2) 00:07:51.809 3654.892 - 3680.098: 0.3245% ( 2) 00:07:51.809 3680.098 - 3705.305: 0.3425% ( 3) 00:07:51.809 3705.305 - 3730.511: 0.3546% ( 2) 00:07:51.809 3730.511 - 3755.717: 0.3666% ( 2) 00:07:51.809 3755.717 - 3780.923: 0.3846% ( 3) 00:07:51.809 5721.797 - 5747.003: 0.4026% ( 3) 00:07:51.809 5747.003 - 5772.209: 0.4207% ( 3) 00:07:51.809 5772.209 - 5797.415: 0.4447% ( 4) 00:07:51.809 5797.415 - 5822.622: 0.5168% ( 12) 00:07:51.809 5822.622 - 5847.828: 0.5889% ( 12) 00:07:51.809 5847.828 - 5873.034: 0.6250% ( 6) 00:07:51.809 5873.034 - 5898.240: 0.6430% ( 3) 00:07:51.809 5898.240 - 5923.446: 0.6550% ( 2) 00:07:51.809 5923.446 - 5948.652: 0.6671% ( 2) 00:07:51.809 5948.652 - 5973.858: 0.6851% ( 3) 00:07:51.809 5973.858 - 5999.065: 0.6971% ( 2) 00:07:51.809 5999.065 - 6024.271: 0.7091% ( 2) 00:07:51.809 6024.271 - 6049.477: 0.7212% ( 2) 00:07:51.809 6049.477 - 6074.683: 0.7332% ( 2) 00:07:51.809 6074.683 - 6099.889: 0.7512% ( 3) 00:07:51.809 6099.889 - 6125.095: 0.7632% ( 2) 00:07:51.809 6125.095 - 6150.302: 0.7692% ( 1) 00:07:51.809 6150.302 - 6175.508: 0.7873% ( 3) 00:07:51.809 6175.508 - 6200.714: 0.7993% ( 2) 00:07:51.809 6200.714 - 6225.920: 0.8173% ( 3) 00:07:51.809 6225.920 - 6251.126: 0.8233% ( 1) 00:07:51.809 6251.126 - 6276.332: 0.8413% ( 3) 00:07:51.809 6276.332 - 6301.538: 0.8594% ( 3) 00:07:51.809 6301.538 - 6326.745: 0.8774% ( 3) 00:07:51.809 6326.745 - 6351.951: 0.8954% ( 3) 00:07:51.809 6351.951 - 6377.157: 0.9075% ( 2) 00:07:51.809 6377.157 - 6402.363: 0.9255% ( 3) 00:07:51.809 6402.363 - 6427.569: 0.9435% ( 3) 00:07:51.809 6427.569 - 6452.775: 1.0156% ( 12) 00:07:51.809 6452.775 - 6503.188: 1.0938% ( 13) 00:07:51.809 6503.188 - 6553.600: 1.2260% ( 22) 00:07:51.809 6553.600 - 6604.012: 1.6406% ( 69) 00:07:51.809 6604.012 - 6654.425: 1.9772% ( 56) 00:07:51.809 6654.425 - 6704.837: 2.3678% ( 65) 00:07:51.809 6704.837 - 6755.249: 3.3413% ( 162) 00:07:51.809 6755.249 - 6805.662: 4.2428% ( 150) 00:07:51.809 6805.662 - 
6856.074: 5.4567% ( 202) 00:07:51.809 6856.074 - 6906.486: 7.7404% ( 380) 00:07:51.809 6906.486 - 6956.898: 9.9219% ( 363) 00:07:51.809 6956.898 - 7007.311: 13.1731% ( 541) 00:07:51.809 7007.311 - 7057.723: 17.7764% ( 766) 00:07:51.809 7057.723 - 7108.135: 21.9171% ( 689) 00:07:51.809 7108.135 - 7158.548: 26.8089% ( 814) 00:07:51.809 7158.548 - 7208.960: 31.9832% ( 861) 00:07:51.809 7208.960 - 7259.372: 37.0252% ( 839) 00:07:51.809 7259.372 - 7309.785: 42.2776% ( 874) 00:07:51.809 7309.785 - 7360.197: 47.1995% ( 819) 00:07:51.809 7360.197 - 7410.609: 52.2536% ( 841) 00:07:51.809 7410.609 - 7461.022: 57.0853% ( 804) 00:07:51.809 7461.022 - 7511.434: 61.5325% ( 740) 00:07:51.809 7511.434 - 7561.846: 65.5349% ( 666) 00:07:51.809 7561.846 - 7612.258: 69.5072% ( 661) 00:07:51.809 7612.258 - 7662.671: 72.3618% ( 475) 00:07:51.809 7662.671 - 7713.083: 74.5252% ( 360) 00:07:51.809 7713.083 - 7763.495: 76.2800% ( 292) 00:07:51.809 7763.495 - 7813.908: 78.4075% ( 354) 00:07:51.809 7813.908 - 7864.320: 79.6635% ( 209) 00:07:51.809 7864.320 - 7914.732: 80.7873% ( 187) 00:07:51.809 7914.732 - 7965.145: 81.9231% ( 189) 00:07:51.809 7965.145 - 8015.557: 83.3714% ( 241) 00:07:51.809 8015.557 - 8065.969: 84.4712% ( 183) 00:07:51.809 8065.969 - 8116.382: 85.1562% ( 114) 00:07:51.809 8116.382 - 8166.794: 85.9916% ( 139) 00:07:51.809 8166.794 - 8217.206: 86.8870% ( 149) 00:07:51.809 8217.206 - 8267.618: 87.4399% ( 92) 00:07:51.809 8267.618 - 8318.031: 88.1430% ( 117) 00:07:51.809 8318.031 - 8368.443: 88.5577% ( 69) 00:07:51.809 8368.443 - 8418.855: 88.9002% ( 57) 00:07:51.809 8418.855 - 8469.268: 89.2368% ( 56) 00:07:51.809 8469.268 - 8519.680: 89.6935% ( 76) 00:07:51.809 8519.680 - 8570.092: 90.0541% ( 60) 00:07:51.809 8570.092 - 8620.505: 90.5469% ( 82) 00:07:51.809 8620.505 - 8670.917: 91.1839% ( 106) 00:07:51.809 8670.917 - 8721.329: 91.9111% ( 121) 00:07:51.809 8721.329 - 8771.742: 92.4219% ( 85) 00:07:51.809 8771.742 - 8822.154: 92.6803% ( 43) 00:07:51.809 8822.154 - 8872.566: 93.0889% ( 68) 00:07:51.809 8872.566 - 8922.978: 93.4075% ( 53) 00:07:51.809 8922.978 - 8973.391: 93.6899% ( 47) 00:07:51.809 8973.391 - 9023.803: 94.2007% ( 85) 00:07:51.809 9023.803 - 9074.215: 94.5012% ( 50) 00:07:51.809 9074.215 - 9124.628: 94.7356% ( 39) 00:07:51.809 9124.628 - 9175.040: 95.1382% ( 67) 00:07:51.809 9175.040 - 9225.452: 95.3365% ( 33) 00:07:51.809 9225.452 - 9275.865: 95.4627% ( 21) 00:07:51.809 9275.865 - 9326.277: 95.5589% ( 16) 00:07:51.809 9326.277 - 9376.689: 95.7151% ( 26) 00:07:51.809 9376.689 - 9427.102: 95.8293% ( 19) 00:07:51.809 9427.102 - 9477.514: 95.9435% ( 19) 00:07:51.809 9477.514 - 9527.926: 96.1719% ( 38) 00:07:51.809 9527.926 - 9578.338: 96.3882% ( 36) 00:07:51.809 9578.338 - 9628.751: 96.6106% ( 37) 00:07:51.809 9628.751 - 9679.163: 96.6707% ( 10) 00:07:51.809 9679.163 - 9729.575: 96.7188% ( 8) 00:07:51.809 9729.575 - 9779.988: 96.7788% ( 10) 00:07:51.809 9779.988 - 9830.400: 96.8389% ( 10) 00:07:51.809 9830.400 - 9880.812: 96.8990% ( 10) 00:07:51.809 9880.812 - 9931.225: 96.9832% ( 14) 00:07:51.809 9931.225 - 9981.637: 97.0373% ( 9) 00:07:51.809 9981.637 - 10032.049: 97.1094% ( 12) 00:07:51.809 10032.049 - 10082.462: 97.1695% ( 10) 00:07:51.809 10082.462 - 10132.874: 97.2776% ( 18) 00:07:51.809 10132.874 - 10183.286: 97.3678% ( 15) 00:07:51.809 10183.286 - 10233.698: 97.4339% ( 11) 00:07:51.809 10233.698 - 10284.111: 97.5060% ( 12) 00:07:51.809 10284.111 - 10334.523: 97.5541% ( 8) 00:07:51.809 10334.523 - 10384.935: 97.5841% ( 5) 00:07:51.809 10384.935 - 10435.348: 97.5962% ( 2) 
00:07:51.809 10435.348 - 10485.760: 97.6082% ( 2) 00:07:51.809 10485.760 - 10536.172: 97.6202% ( 2) 00:07:51.809 10536.172 - 10586.585: 97.6322% ( 2) 00:07:51.809 10586.585 - 10636.997: 97.6382% ( 1) 00:07:51.809 10636.997 - 10687.409: 97.6502% ( 2) 00:07:51.809 10687.409 - 10737.822: 97.6623% ( 2) 00:07:51.809 10737.822 - 10788.234: 97.6683% ( 1) 00:07:51.809 10788.234 - 10838.646: 97.6803% ( 2) 00:07:51.809 10838.646 - 10889.058: 97.6863% ( 1) 00:07:51.809 10889.058 - 10939.471: 97.6923% ( 1) 00:07:51.809 10989.883 - 11040.295: 97.6983% ( 1) 00:07:51.809 11040.295 - 11090.708: 97.7344% ( 6) 00:07:51.809 11090.708 - 11141.120: 97.7704% ( 6) 00:07:51.809 11141.120 - 11191.532: 97.8005% ( 5) 00:07:51.809 11191.532 - 11241.945: 97.8305% ( 5) 00:07:51.809 11241.945 - 11292.357: 97.8726% ( 7) 00:07:51.809 11292.357 - 11342.769: 97.8966% ( 4) 00:07:51.810 11342.769 - 11393.182: 97.9327% ( 6) 00:07:51.810 11393.182 - 11443.594: 97.9688% ( 6) 00:07:51.810 11443.594 - 11494.006: 98.0048% ( 6) 00:07:51.810 11494.006 - 11544.418: 98.0288% ( 4) 00:07:51.810 11544.418 - 11594.831: 98.0829% ( 9) 00:07:51.810 11594.831 - 11645.243: 98.1430% ( 10) 00:07:51.810 11645.243 - 11695.655: 98.2272% ( 14) 00:07:51.810 11695.655 - 11746.068: 98.3173% ( 15) 00:07:51.810 11746.068 - 11796.480: 98.3894% ( 12) 00:07:51.810 11796.480 - 11846.892: 98.4555% ( 11) 00:07:51.810 11846.892 - 11897.305: 98.6118% ( 26) 00:07:51.810 11897.305 - 11947.717: 98.7440% ( 22) 00:07:51.810 11947.717 - 11998.129: 98.7740% ( 5) 00:07:51.810 11998.129 - 12048.542: 98.8041% ( 5) 00:07:51.810 12048.542 - 12098.954: 98.8281% ( 4) 00:07:51.810 12098.954 - 12149.366: 98.8462% ( 3) 00:07:51.810 13107.200 - 13208.025: 98.8702% ( 4) 00:07:51.810 13208.025 - 13308.849: 98.9123% ( 7) 00:07:51.810 13308.849 - 13409.674: 98.9543% ( 7) 00:07:51.810 13409.674 - 13510.498: 99.0325% ( 13) 00:07:51.810 13510.498 - 13611.323: 99.0805% ( 8) 00:07:51.810 13611.323 - 13712.148: 99.1106% ( 5) 00:07:51.810 13712.148 - 13812.972: 99.1406% ( 5) 00:07:51.810 13812.972 - 13913.797: 99.1707% ( 5) 00:07:51.810 13913.797 - 14014.622: 99.2007% ( 5) 00:07:51.810 14014.622 - 14115.446: 99.2248% ( 4) 00:07:51.810 14115.446 - 14216.271: 99.2308% ( 1) 00:07:51.810 18148.431 - 18249.255: 99.2428% ( 2) 00:07:51.810 18249.255 - 18350.080: 99.2668% ( 4) 00:07:51.810 18350.080 - 18450.905: 99.2909% ( 4) 00:07:51.810 18450.905 - 18551.729: 99.3089% ( 3) 00:07:51.810 18551.729 - 18652.554: 99.3269% ( 3) 00:07:51.810 18652.554 - 18753.378: 99.3510% ( 4) 00:07:51.810 18753.378 - 18854.203: 99.3750% ( 4) 00:07:51.810 18854.203 - 18955.028: 99.3930% ( 3) 00:07:51.810 18955.028 - 19055.852: 99.4231% ( 5) 00:07:51.810 19055.852 - 19156.677: 99.4471% ( 4) 00:07:51.810 19156.677 - 19257.502: 99.4712% ( 4) 00:07:51.810 19257.502 - 19358.326: 99.4892% ( 3) 00:07:51.810 19358.326 - 19459.151: 99.5132% ( 4) 00:07:51.810 19459.151 - 19559.975: 99.5312% ( 3) 00:07:51.810 19559.975 - 19660.800: 99.5493% ( 3) 00:07:51.810 19660.800 - 19761.625: 99.5733% ( 4) 00:07:51.810 19761.625 - 19862.449: 99.5853% ( 2) 00:07:51.810 19862.449 - 19963.274: 99.6034% ( 3) 00:07:51.810 19963.274 - 20064.098: 99.6154% ( 2) 00:07:51.810 23794.609 - 23895.434: 99.6334% ( 3) 00:07:51.810 23895.434 - 23996.258: 99.6454% ( 2) 00:07:51.810 23996.258 - 24097.083: 99.6995% ( 9) 00:07:51.810 24097.083 - 24197.908: 99.7416% ( 7) 00:07:51.810 24298.732 - 24399.557: 99.7536% ( 2) 00:07:51.810 24399.557 - 24500.382: 99.7837% ( 5) 00:07:51.810 24500.382 - 24601.206: 99.8197% ( 6) 00:07:51.810 24601.206 - 24702.031: 99.8558% ( 
6) 00:07:51.810 24702.031 - 24802.855: 99.8918% ( 6) 00:07:51.810 24802.855 - 24903.680: 99.9339% ( 7) 00:07:51.810 24903.680 - 25004.505: 99.9639% ( 5) 00:07:51.810 25004.505 - 25105.329: 100.0000% ( 6) 00:07:51.810 00:07:51.810 ************************************ 00:07:51.810 END TEST nvme_perf 00:07:51.810 ************************************ 00:07:51.810 21:13:41 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:51.810 00:07:51.810 real 0m2.436s 00:07:51.810 user 0m2.175s 00:07:51.810 sys 0m0.157s 00:07:51.810 21:13:41 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.810 21:13:41 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:51.810 21:13:41 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:51.810 21:13:41 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:51.810 21:13:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.810 21:13:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.810 ************************************ 00:07:51.810 START TEST nvme_hello_world 00:07:51.810 ************************************ 00:07:51.810 21:13:41 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:51.810 Initializing NVMe Controllers 00:07:51.810 Attached to 0000:00:10.0 00:07:51.810 Namespace ID: 1 size: 6GB 00:07:51.810 Attached to 0000:00:11.0 00:07:51.810 Namespace ID: 1 size: 5GB 00:07:51.810 Attached to 0000:00:13.0 00:07:51.810 Namespace ID: 1 size: 1GB 00:07:51.810 Attached to 0000:00:12.0 00:07:51.810 Namespace ID: 1 size: 4GB 00:07:51.810 Namespace ID: 2 size: 4GB 00:07:51.810 Namespace ID: 3 size: 4GB 00:07:51.810 Initialization complete. 00:07:51.810 INFO: using host memory buffer for IO 00:07:51.810 Hello world! 00:07:51.810 INFO: using host memory buffer for IO 00:07:51.810 Hello world! 00:07:51.810 INFO: using host memory buffer for IO 00:07:51.810 Hello world! 00:07:51.810 INFO: using host memory buffer for IO 00:07:51.810 Hello world! 00:07:51.810 INFO: using host memory buffer for IO 00:07:51.810 Hello world! 00:07:51.810 INFO: using host memory buffer for IO 00:07:51.810 Hello world! 
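For readers unfamiliar with the example being run here: SPDK's hello_world follows the library's canonical flow -- probe and attach each PCIe controller, open namespace 1, allocate an I/O qpair, then write a DMA-safe host memory buffer (the "using host memory buffer for IO" lines above) and read it back while polling for completions. Below is a minimal sketch of the write half using SPDK's public nvme API; error handling, the read-back, and controller detach are omitted, and the buffer size, LBA, and callback names are illustrative assumptions rather than the example's exact code.

#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    *(bool *)arg = true;                      /* mark the I/O as finished */
}

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    return true;                              /* attach every controller found */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts)
{
    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);   /* NSID 1 */
    struct spdk_nvme_qpair *qpair =
        spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    /* pinned, DMA-able allocation -- the "host memory buffer" used for I/O */
    char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
                             SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    bool done = false;

    snprintf(buf, 0x1000, "Hello world!\n");
    spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* block count */,
                           io_complete, &done, 0);
    while (!done) {
        spdk_nvme_qpair_process_completions(qpair, 0);   /* poll, no interrupts */
    }
    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "hello_world";
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* enumerate all local NVMe controllers; attach_cb fires once per device */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}

The real example additionally issues the matching read and prints the buffer contents, which is where each "Hello world!" line in the log above comes from -- one per attached namespace.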
00:07:51.810 ************************************ 00:07:51.810 END TEST nvme_hello_world 00:07:51.810 ************************************ 00:07:51.810 00:07:51.810 real 0m0.178s 00:07:51.810 user 0m0.060s 00:07:51.810 sys 0m0.075s 00:07:51.810 21:13:41 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.810 21:13:41 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:51.810 21:13:41 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:51.810 21:13:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.810 21:13:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.810 21:13:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.810 ************************************ 00:07:51.810 START TEST nvme_sgl 00:07:51.810 ************************************ 00:07:51.810 21:13:41 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:52.068 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:52.068 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:52.068 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:52.068 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:52.068 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:52.068 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:52.068 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:52.068 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:52.068 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:52.068 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:52.068 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:52.068 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:52.068 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:52.068 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:52.068 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:52.068 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:52.068 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:52.068 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:52.068 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:52.069 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:52.069 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:52.069 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:52.069 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:52.069 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:52.069 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:52.069 NVMe Readv/Writev Request test 00:07:52.069 Attached to 0000:00:10.0 00:07:52.069 Attached to 0000:00:11.0 00:07:52.069 Attached to 0000:00:13.0 00:07:52.069 Attached to 0000:00:12.0 00:07:52.069 0000:00:10.0: build_io_request_2 test passed 00:07:52.069 0000:00:10.0: build_io_request_4 test passed 00:07:52.069 0000:00:10.0: build_io_request_5 test passed 00:07:52.069 0000:00:10.0: build_io_request_6 test passed 00:07:52.069 0000:00:10.0: build_io_request_7 test passed 00:07:52.069 0000:00:10.0: build_io_request_10 test passed 00:07:52.069 0000:00:11.0: build_io_request_2 test passed 00:07:52.069 0000:00:11.0: build_io_request_4 test passed 00:07:52.069 0000:00:11.0: build_io_request_5 test passed 00:07:52.069 0000:00:11.0: build_io_request_6 test passed 00:07:52.069 0000:00:11.0: build_io_request_7 test passed 00:07:52.069 0000:00:11.0: build_io_request_10 test passed 00:07:52.069 Cleaning up... 00:07:52.069 ************************************ 00:07:52.069 END TEST nvme_sgl 00:07:52.069 ************************************ 00:07:52.069 00:07:52.069 real 0m0.247s 00:07:52.069 user 0m0.120s 00:07:52.069 sys 0m0.081s 00:07:52.069 21:13:41 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.069 21:13:41 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:52.069 21:13:41 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:52.069 21:13:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.069 21:13:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.069 21:13:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.069 ************************************ 00:07:52.069 START TEST nvme_e2edp 00:07:52.069 ************************************ 00:07:52.069 21:13:41 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:52.327 NVMe Write/Read with End-to-End data protection test 00:07:52.327 Attached to 0000:00:10.0 00:07:52.327 Attached to 0000:00:11.0 00:07:52.327 Attached to 0000:00:13.0 00:07:52.327 Attached to 0000:00:12.0 00:07:52.327 Cleaning up... 
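A note on the sgl test that just completed: it submits vectored I/O through spdk_nvme_ns_cmd_readv()/spdk_nvme_ns_cmd_writev(), which take, instead of a flat buffer, a pair of callbacks that let the driver walk the caller's scatter-gather list. Each numbered build_io_request_N case assembles a different SGE layout, and the "Invalid IO length parameter" lines are the deliberate negative cases: the summed SGE lengths disagree with the requested LBA count, so the submission is expected to be rejected. A rough sketch of the callback shape follows; the iterator struct and its fields are hypothetical, not the test's own code.

#include "spdk/nvme.h"

struct sgl_iter {                         /* hypothetical iterator state */
    struct { void *base; uint32_t len; } sge[4];
    int nsge;
    int cur;
};

/* invoked when the driver (re)starts walking the list at byte `offset` */
static void reset_sgl(void *ref, uint32_t offset)
{
    struct sgl_iter *it = ref;
    it->cur = 0;
    (void)offset;   /* a full version would seek; the simple cases start at 0 */
}

/* invoked repeatedly to fetch the next segment; the driver calls it only
 * as often as needed to cover the transfer, so `cur` stays within bounds */
static int next_sge(void *ref, void **address, uint32_t *length)
{
    struct sgl_iter *it = ref;
    *address = it->sge[it->cur].base;
    *length  = it->sge[it->cur].len;
    it->cur++;
    return 0;                             /* 0 signals success */
}

/* the SGE lengths must sum to lba_count * sector size; when they do not,
 * submission fails -- the "Invalid IO length parameter" outcome above */
static int submit_vectored_write(struct spdk_nvme_ns *ns,
                                 struct spdk_nvme_qpair *qp,
                                 struct sgl_iter *it, spdk_nvme_cmd_cb cb)
{
    return spdk_nvme_ns_cmd_writev(ns, qp, 0 /* lba */, 8 /* lba_count */,
                                   cb, it, 0, reset_sgl, next_sge);
}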
00:07:52.327 00:07:52.327 real 0m0.168s 00:07:52.327 user 0m0.064s 00:07:52.327 sys 0m0.067s 00:07:52.327 21:13:41 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.327 21:13:41 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:52.327 ************************************ 00:07:52.327 END TEST nvme_e2edp 00:07:52.327 ************************************ 00:07:52.327 21:13:41 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:52.327 21:13:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.327 21:13:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.327 21:13:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.327 ************************************ 00:07:52.327 START TEST nvme_reserve 00:07:52.327 ************************************ 00:07:52.327 21:13:41 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:52.585 ===================================================== 00:07:52.585 NVMe Controller at PCI bus 0, device 16, function 0 00:07:52.585 ===================================================== 00:07:52.585 Reservations: Not Supported 00:07:52.585 ===================================================== 00:07:52.585 NVMe Controller at PCI bus 0, device 17, function 0 00:07:52.585 ===================================================== 00:07:52.585 Reservations: Not Supported 00:07:52.585 ===================================================== 00:07:52.585 NVMe Controller at PCI bus 0, device 19, function 0 00:07:52.585 ===================================================== 00:07:52.585 Reservations: Not Supported 00:07:52.585 ===================================================== 00:07:52.585 NVMe Controller at PCI bus 0, device 18, function 0 00:07:52.585 ===================================================== 00:07:52.585 Reservations: Not Supported 00:07:52.585 Reservation test passed 00:07:52.585 ************************************ 00:07:52.585 END TEST nvme_reserve 00:07:52.585 ************************************ 00:07:52.585 00:07:52.585 real 0m0.190s 00:07:52.585 user 0m0.062s 00:07:52.585 sys 0m0.077s 00:07:52.585 21:13:42 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.585 21:13:42 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:52.585 21:13:42 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:52.585 21:13:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.585 21:13:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.585 21:13:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.585 ************************************ 00:07:52.586 START TEST nvme_err_injection 00:07:52.586 ************************************ 00:07:52.586 21:13:42 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:52.844 NVMe Error Injection test 00:07:52.844 Attached to 0000:00:10.0 00:07:52.844 Attached to 0000:00:11.0 00:07:52.844 Attached to 0000:00:13.0 00:07:52.844 Attached to 0000:00:12.0 00:07:52.844 0000:00:13.0: get features failed as expected 00:07:52.844 0000:00:12.0: get features failed as expected 00:07:52.844 0000:00:10.0: get features failed as expected 00:07:52.844 0000:00:11.0: get features failed as expected 00:07:52.844 
0000:00:10.0: get features successfully as expected 00:07:52.844 0000:00:11.0: get features successfully as expected 00:07:52.844 0000:00:13.0: get features successfully as expected 00:07:52.844 0000:00:12.0: get features successfully as expected 00:07:52.844 0000:00:10.0: read failed as expected 00:07:52.844 0000:00:11.0: read failed as expected 00:07:52.844 0000:00:13.0: read failed as expected 00:07:52.844 0000:00:12.0: read failed as expected 00:07:52.844 0000:00:10.0: read successfully as expected 00:07:52.844 0000:00:11.0: read successfully as expected 00:07:52.844 0000:00:13.0: read successfully as expected 00:07:52.844 0000:00:12.0: read successfully as expected 00:07:52.844 Cleaning up... 00:07:52.844 00:07:52.844 real 0m0.178s 00:07:52.844 user 0m0.062s 00:07:52.844 sys 0m0.075s 00:07:52.844 21:13:42 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.844 ************************************ 00:07:52.844 21:13:42 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:52.844 END TEST nvme_err_injection 00:07:52.844 ************************************ 00:07:52.844 21:13:42 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:52.844 21:13:42 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:52.844 21:13:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.844 21:13:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.844 ************************************ 00:07:52.844 START TEST nvme_overhead 00:07:52.844 ************************************ 00:07:52.844 21:13:42 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:54.219 Initializing NVMe Controllers 00:07:54.219 Attached to 0000:00:10.0 00:07:54.219 Attached to 0000:00:11.0 00:07:54.219 Attached to 0000:00:13.0 00:07:54.219 Attached to 0000:00:12.0 00:07:54.219 Initialization complete. Launching workers. 
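Before the numbers below: the overhead tool reports, per I/O, the time spent in the submission and completion paths (in nanoseconds, per its avg/min/max headers), followed by bucketed histograms in the same format as the nvme_perf output earlier in this log. Each row covers one latency bucket in microseconds; the parenthesized figure is the count of I/Os landing in that bucket alone, while the percentage reads as cumulative, rising monotonically to 100.0000% at the largest bucket. For example, the submit-histogram row "11.028 - 11.077: 25.7082% ( 2489)" below says that 2489 submissions took between 11.028 us and 11.077 us, and that 25.7082% of all submissions took 11.077 us or less -- consistent with the reported submit average of 11579.2 ns (about 11.58 us).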
00:07:54.219 submit (in ns) avg, min, max = 11579.2, 10455.4, 223163.8 00:07:54.219 complete (in ns) avg, min, max = 7814.1, 7299.2, 271663.8 00:07:54.219 00:07:54.219 Submit histogram 00:07:54.219 ================ 00:07:54.219 Range in us Cumulative Count 00:07:54.219 10.437 - 10.486: 0.0062% ( 1) 00:07:54.219 10.535 - 10.585: 0.0125% ( 1) 00:07:54.219 10.732 - 10.782: 0.0249% ( 2) 00:07:54.219 10.782 - 10.831: 0.0498% ( 4) 00:07:54.219 10.831 - 10.880: 0.1806% ( 21) 00:07:54.219 10.880 - 10.929: 0.6600% ( 77) 00:07:54.219 10.929 - 10.978: 2.8765% ( 356) 00:07:54.219 10.978 - 11.028: 10.2111% ( 1178) 00:07:54.219 11.028 - 11.077: 25.7082% ( 2489) 00:07:54.219 11.077 - 11.126: 44.8291% ( 3071) 00:07:54.219 11.126 - 11.175: 62.0634% ( 2768) 00:07:54.219 11.175 - 11.225: 73.2146% ( 1791) 00:07:54.219 11.225 - 11.274: 79.8518% ( 1066) 00:07:54.219 11.274 - 11.323: 83.0770% ( 518) 00:07:54.219 11.323 - 11.372: 84.9511% ( 301) 00:07:54.219 11.372 - 11.422: 86.3707% ( 228) 00:07:54.219 11.422 - 11.471: 87.4665% ( 176) 00:07:54.219 11.471 - 11.520: 88.4565% ( 159) 00:07:54.219 11.520 - 11.569: 89.2410% ( 126) 00:07:54.219 11.569 - 11.618: 90.0068% ( 123) 00:07:54.219 11.618 - 11.668: 90.4365% ( 69) 00:07:54.219 11.668 - 11.717: 90.9159% ( 77) 00:07:54.219 11.717 - 11.766: 91.2832% ( 59) 00:07:54.219 11.766 - 11.815: 91.5821% ( 48) 00:07:54.219 11.815 - 11.865: 91.9370% ( 57) 00:07:54.219 11.865 - 11.914: 92.1860% ( 40) 00:07:54.219 11.914 - 11.963: 92.3977% ( 34) 00:07:54.219 11.963 - 12.012: 92.6343% ( 38) 00:07:54.219 12.012 - 12.062: 92.8273% ( 31) 00:07:54.219 12.062 - 12.111: 93.0951% ( 43) 00:07:54.219 12.111 - 12.160: 93.3130% ( 35) 00:07:54.219 12.160 - 12.209: 93.4749% ( 26) 00:07:54.219 12.209 - 12.258: 93.6305% ( 25) 00:07:54.219 12.258 - 12.308: 93.8422% ( 34) 00:07:54.219 12.308 - 12.357: 93.9481% ( 17) 00:07:54.219 12.357 - 12.406: 94.0166% ( 11) 00:07:54.219 12.406 - 12.455: 94.0851% ( 11) 00:07:54.219 12.455 - 12.505: 94.1286% ( 7) 00:07:54.219 12.505 - 12.554: 94.1598% ( 5) 00:07:54.219 12.554 - 12.603: 94.1971% ( 6) 00:07:54.219 12.603 - 12.702: 94.2283% ( 5) 00:07:54.219 12.702 - 12.800: 94.2656% ( 6) 00:07:54.219 12.800 - 12.898: 94.2967% ( 5) 00:07:54.219 12.898 - 12.997: 94.5396% ( 39) 00:07:54.219 12.997 - 13.095: 94.8197% ( 45) 00:07:54.219 13.095 - 13.194: 95.0937% ( 44) 00:07:54.219 13.194 - 13.292: 95.3241% ( 37) 00:07:54.219 13.292 - 13.391: 95.4424% ( 19) 00:07:54.219 13.391 - 13.489: 95.5171% ( 12) 00:07:54.219 13.489 - 13.588: 95.5980% ( 13) 00:07:54.219 13.588 - 13.686: 95.6541% ( 9) 00:07:54.219 13.686 - 13.785: 95.6914% ( 6) 00:07:54.219 13.785 - 13.883: 95.7475% ( 9) 00:07:54.219 13.883 - 13.982: 95.7724% ( 4) 00:07:54.219 13.982 - 14.080: 95.7973% ( 4) 00:07:54.219 14.080 - 14.178: 95.8284% ( 5) 00:07:54.219 14.178 - 14.277: 95.8533% ( 4) 00:07:54.219 14.277 - 14.375: 95.9529% ( 16) 00:07:54.219 14.375 - 14.474: 96.0152% ( 10) 00:07:54.219 14.474 - 14.572: 96.1459% ( 21) 00:07:54.219 14.572 - 14.671: 96.2767% ( 21) 00:07:54.219 14.671 - 14.769: 96.4012% ( 20) 00:07:54.219 14.769 - 14.868: 96.5320% ( 21) 00:07:54.219 14.868 - 14.966: 96.6939% ( 26) 00:07:54.219 14.966 - 15.065: 96.8682% ( 28) 00:07:54.219 15.065 - 15.163: 97.0425% ( 28) 00:07:54.219 15.163 - 15.262: 97.1920% ( 24) 00:07:54.219 15.262 - 15.360: 97.3289% ( 22) 00:07:54.219 15.360 - 15.458: 97.3974% ( 11) 00:07:54.219 15.458 - 15.557: 97.5095% ( 18) 00:07:54.219 15.557 - 15.655: 97.5593% ( 8) 00:07:54.219 15.655 - 15.754: 97.5967% ( 6) 00:07:54.219 15.754 - 15.852: 97.6153% ( 3) 00:07:54.219 15.852 
- 15.951: 97.6527% ( 6) 00:07:54.219 15.951 - 16.049: 97.6776% ( 4) 00:07:54.219 16.049 - 16.148: 97.7087% ( 5) 00:07:54.219 16.148 - 16.246: 97.7150% ( 1) 00:07:54.219 16.246 - 16.345: 97.7585% ( 7) 00:07:54.219 16.345 - 16.443: 97.7710% ( 2) 00:07:54.219 16.443 - 16.542: 97.7959% ( 4) 00:07:54.219 16.542 - 16.640: 97.8395% ( 7) 00:07:54.219 16.640 - 16.738: 97.9951% ( 25) 00:07:54.219 16.738 - 16.837: 98.1321% ( 22) 00:07:54.219 16.837 - 16.935: 98.2068% ( 12) 00:07:54.219 16.935 - 17.034: 98.3251% ( 19) 00:07:54.219 17.034 - 17.132: 98.3936% ( 11) 00:07:54.219 17.132 - 17.231: 98.4621% ( 11) 00:07:54.219 17.231 - 17.329: 98.5306% ( 11) 00:07:54.219 17.329 - 17.428: 98.5680% ( 6) 00:07:54.219 17.428 - 17.526: 98.5991% ( 5) 00:07:54.219 17.526 - 17.625: 98.6489% ( 8) 00:07:54.219 17.625 - 17.723: 98.6676% ( 3) 00:07:54.219 17.723 - 17.822: 98.7049% ( 6) 00:07:54.219 17.822 - 17.920: 98.7734% ( 11) 00:07:54.219 17.920 - 18.018: 98.8046% ( 5) 00:07:54.219 18.018 - 18.117: 98.8170% ( 2) 00:07:54.219 18.117 - 18.215: 98.8295% ( 2) 00:07:54.219 18.215 - 18.314: 98.8481% ( 3) 00:07:54.219 18.314 - 18.412: 98.8668% ( 3) 00:07:54.219 18.412 - 18.511: 98.8917% ( 4) 00:07:54.219 18.511 - 18.609: 98.9166% ( 4) 00:07:54.219 18.609 - 18.708: 98.9415% ( 4) 00:07:54.219 18.708 - 18.806: 98.9727% ( 5) 00:07:54.219 18.905 - 19.003: 98.9789% ( 1) 00:07:54.219 19.298 - 19.397: 98.9851% ( 1) 00:07:54.219 19.495 - 19.594: 99.0038% ( 3) 00:07:54.219 19.692 - 19.791: 99.0225% ( 3) 00:07:54.219 19.791 - 19.889: 99.0287% ( 1) 00:07:54.219 19.889 - 19.988: 99.0349% ( 1) 00:07:54.219 20.185 - 20.283: 99.0412% ( 1) 00:07:54.219 20.283 - 20.382: 99.0598% ( 3) 00:07:54.219 20.578 - 20.677: 99.0661% ( 1) 00:07:54.219 21.071 - 21.169: 99.0723% ( 1) 00:07:54.219 21.268 - 21.366: 99.0785% ( 1) 00:07:54.219 21.465 - 21.563: 99.0847% ( 1) 00:07:54.219 21.957 - 22.055: 99.1034% ( 3) 00:07:54.219 22.055 - 22.154: 99.1159% ( 2) 00:07:54.219 22.252 - 22.351: 99.1283% ( 2) 00:07:54.219 22.449 - 22.548: 99.1345% ( 1) 00:07:54.219 22.646 - 22.745: 99.1408% ( 1) 00:07:54.219 22.843 - 22.942: 99.1470% ( 1) 00:07:54.219 23.237 - 23.335: 99.1532% ( 1) 00:07:54.219 24.025 - 24.123: 99.1595% ( 1) 00:07:54.219 24.123 - 24.222: 99.1657% ( 1) 00:07:54.219 24.222 - 24.320: 99.1719% ( 1) 00:07:54.219 24.517 - 24.615: 99.1781% ( 1) 00:07:54.219 24.812 - 24.911: 99.1844% ( 1) 00:07:54.219 24.911 - 25.009: 99.1906% ( 1) 00:07:54.219 27.175 - 27.372: 99.1968% ( 1) 00:07:54.219 27.569 - 27.766: 99.2030% ( 1) 00:07:54.219 27.766 - 27.963: 99.2279% ( 4) 00:07:54.219 27.963 - 28.160: 99.2466% ( 3) 00:07:54.219 28.160 - 28.357: 99.2591% ( 2) 00:07:54.219 28.751 - 28.948: 99.2715% ( 2) 00:07:54.219 29.932 - 30.129: 99.2778% ( 1) 00:07:54.219 30.720 - 30.917: 99.3089% ( 5) 00:07:54.219 30.917 - 31.114: 99.4957% ( 30) 00:07:54.219 31.114 - 31.311: 99.6887% ( 31) 00:07:54.219 31.311 - 31.508: 99.7509% ( 10) 00:07:54.219 31.508 - 31.705: 99.7883% ( 6) 00:07:54.219 31.705 - 31.902: 99.8194% ( 5) 00:07:54.219 31.902 - 32.098: 99.8381% ( 3) 00:07:54.219 32.098 - 32.295: 99.8568% ( 3) 00:07:54.219 32.295 - 32.492: 99.8755% ( 3) 00:07:54.219 34.658 - 34.855: 99.8817% ( 1) 00:07:54.219 35.643 - 35.840: 99.8879% ( 1) 00:07:54.219 38.794 - 38.991: 99.8942% ( 1) 00:07:54.219 39.582 - 39.778: 99.9004% ( 1) 00:07:54.219 39.975 - 40.172: 99.9066% ( 1) 00:07:54.219 40.172 - 40.369: 99.9191% ( 2) 00:07:54.219 41.748 - 41.945: 99.9253% ( 1) 00:07:54.219 41.945 - 42.142: 99.9315% ( 1) 00:07:54.219 44.505 - 44.702: 99.9377% ( 1) 00:07:54.220 46.474 - 46.671: 99.9440% ( 1) 
00:07:54.220 48.049 - 48.246: 99.9502% ( 1) 00:07:54.220 50.215 - 50.412: 99.9564% ( 1) 00:07:54.220 51.594 - 51.988: 99.9626% ( 1) 00:07:54.220 55.926 - 56.320: 99.9689% ( 1) 00:07:54.220 56.714 - 57.108: 99.9751% ( 1) 00:07:54.220 65.378 - 65.772: 99.9813% ( 1) 00:07:54.220 77.982 - 78.375: 99.9875% ( 1) 00:07:54.220 79.163 - 79.557: 99.9938% ( 1) 00:07:54.220 222.129 - 223.705: 100.0000% ( 1) 00:07:54.220 00:07:54.220 Complete histogram 00:07:54.220 ================== 00:07:54.220 Range in us Cumulative Count 00:07:54.220 7.286 - 7.335: 0.3051% ( 49) 00:07:54.220 7.335 - 7.385: 4.1405% ( 616) 00:07:54.220 7.385 - 7.434: 19.1769% ( 2415) 00:07:54.220 7.434 - 7.483: 42.1269% ( 3686) 00:07:54.220 7.483 - 7.532: 63.9313% ( 3502) 00:07:54.220 7.532 - 7.582: 78.7124% ( 2374) 00:07:54.220 7.582 - 7.631: 86.7941% ( 1298) 00:07:54.220 7.631 - 7.680: 90.5423% ( 602) 00:07:54.220 7.680 - 7.729: 92.4538% ( 307) 00:07:54.220 7.729 - 7.778: 93.4500% ( 160) 00:07:54.220 7.778 - 7.828: 93.9232% ( 76) 00:07:54.220 7.828 - 7.877: 94.1349% ( 34) 00:07:54.220 7.877 - 7.926: 94.2220% ( 14) 00:07:54.220 7.926 - 7.975: 94.3341% ( 18) 00:07:54.220 7.975 - 8.025: 94.3777% ( 7) 00:07:54.220 8.025 - 8.074: 94.5022% ( 20) 00:07:54.220 8.074 - 8.123: 94.6392% ( 22) 00:07:54.220 8.123 - 8.172: 94.8322% ( 31) 00:07:54.220 8.172 - 8.222: 95.0688% ( 38) 00:07:54.220 8.222 - 8.271: 95.2369% ( 27) 00:07:54.220 8.271 - 8.320: 95.3988% ( 26) 00:07:54.220 8.320 - 8.369: 95.5794% ( 29) 00:07:54.220 8.369 - 8.418: 95.7537% ( 28) 00:07:54.220 8.418 - 8.468: 95.8720% ( 19) 00:07:54.220 8.468 - 8.517: 95.9592% ( 14) 00:07:54.220 8.517 - 8.566: 96.0090% ( 8) 00:07:54.220 8.566 - 8.615: 96.0525% ( 7) 00:07:54.220 8.615 - 8.665: 96.0712% ( 3) 00:07:54.220 8.665 - 8.714: 96.0837% ( 2) 00:07:54.220 8.714 - 8.763: 96.0961% ( 2) 00:07:54.220 8.763 - 8.812: 96.1024% ( 1) 00:07:54.220 8.812 - 8.862: 96.1086% ( 1) 00:07:54.220 8.960 - 9.009: 96.1148% ( 1) 00:07:54.220 9.009 - 9.058: 96.1273% ( 2) 00:07:54.220 9.157 - 9.206: 96.1335% ( 1) 00:07:54.220 9.206 - 9.255: 96.1522% ( 3) 00:07:54.220 9.502 - 9.551: 96.1584% ( 1) 00:07:54.220 9.600 - 9.649: 96.1646% ( 1) 00:07:54.220 9.649 - 9.698: 96.1895% ( 4) 00:07:54.220 9.698 - 9.748: 96.1958% ( 1) 00:07:54.220 9.748 - 9.797: 96.2518% ( 9) 00:07:54.220 9.797 - 9.846: 96.3078% ( 9) 00:07:54.220 9.846 - 9.895: 96.3265% ( 3) 00:07:54.220 9.895 - 9.945: 96.4012% ( 12) 00:07:54.220 9.945 - 9.994: 96.4635% ( 10) 00:07:54.220 9.994 - 10.043: 96.5444% ( 13) 00:07:54.220 10.043 - 10.092: 96.6191% ( 12) 00:07:54.220 10.092 - 10.142: 96.7499% ( 21) 00:07:54.220 10.142 - 10.191: 96.8495% ( 16) 00:07:54.220 10.191 - 10.240: 96.9367% ( 14) 00:07:54.220 10.240 - 10.289: 97.0238% ( 14) 00:07:54.220 10.289 - 10.338: 97.1359% ( 18) 00:07:54.220 10.338 - 10.388: 97.2231% ( 14) 00:07:54.220 10.388 - 10.437: 97.3165% ( 15) 00:07:54.220 10.437 - 10.486: 97.3850% ( 11) 00:07:54.220 10.486 - 10.535: 97.4472% ( 10) 00:07:54.220 10.535 - 10.585: 97.5219% ( 12) 00:07:54.220 10.585 - 10.634: 97.5967% ( 12) 00:07:54.220 10.634 - 10.683: 97.6402% ( 7) 00:07:54.220 10.683 - 10.732: 97.6838% ( 7) 00:07:54.220 10.732 - 10.782: 97.7212% ( 6) 00:07:54.220 10.782 - 10.831: 97.7399% ( 3) 00:07:54.220 10.831 - 10.880: 97.7585% ( 3) 00:07:54.220 10.880 - 10.929: 97.7835% ( 4) 00:07:54.220 10.978 - 11.028: 97.8084% ( 4) 00:07:54.220 11.028 - 11.077: 97.8208% ( 2) 00:07:54.220 11.225 - 11.274: 97.8270% ( 1) 00:07:54.220 11.274 - 11.323: 97.8519% ( 4) 00:07:54.220 11.323 - 11.372: 97.8582% ( 1) 00:07:54.220 11.471 - 11.520: 97.8644% ( 
1) 00:07:54.220 11.766 - 11.815: 97.8706% ( 1) 00:07:54.220 12.702 - 12.800: 97.8831% ( 2) 00:07:54.220 12.800 - 12.898: 97.9267% ( 7) 00:07:54.220 12.898 - 12.997: 97.9951% ( 11) 00:07:54.220 12.997 - 13.095: 98.0574% ( 10) 00:07:54.220 13.095 - 13.194: 98.1508% ( 15) 00:07:54.220 13.194 - 13.292: 98.2691% ( 19) 00:07:54.220 13.292 - 13.391: 98.3189% ( 8) 00:07:54.220 13.391 - 13.489: 98.3563% ( 6) 00:07:54.220 13.489 - 13.588: 98.4434% ( 14) 00:07:54.220 13.588 - 13.686: 98.4870% ( 7) 00:07:54.220 13.686 - 13.785: 98.5306% ( 7) 00:07:54.220 13.785 - 13.883: 98.5617% ( 5) 00:07:54.220 13.883 - 13.982: 98.6364% ( 12) 00:07:54.220 13.982 - 14.080: 98.7547% ( 19) 00:07:54.220 14.080 - 14.178: 98.7797% ( 4) 00:07:54.220 14.178 - 14.277: 98.8108% ( 5) 00:07:54.220 14.277 - 14.375: 98.8357% ( 4) 00:07:54.220 14.375 - 14.474: 98.8793% ( 7) 00:07:54.220 14.474 - 14.572: 98.9229% ( 7) 00:07:54.220 14.572 - 14.671: 98.9353% ( 2) 00:07:54.220 14.671 - 14.769: 98.9602% ( 4) 00:07:54.220 14.868 - 14.966: 98.9789% ( 3) 00:07:54.220 14.966 - 15.065: 99.0038% ( 4) 00:07:54.220 15.065 - 15.163: 99.0100% ( 1) 00:07:54.220 15.163 - 15.262: 99.0163% ( 1) 00:07:54.220 15.262 - 15.360: 99.0225% ( 1) 00:07:54.220 15.458 - 15.557: 99.0287% ( 1) 00:07:54.220 15.754 - 15.852: 99.0349% ( 1) 00:07:54.220 16.049 - 16.148: 99.0412% ( 1) 00:07:54.220 16.148 - 16.246: 99.0474% ( 1) 00:07:54.220 16.246 - 16.345: 99.0598% ( 2) 00:07:54.220 16.345 - 16.443: 99.0661% ( 1) 00:07:54.220 16.443 - 16.542: 99.0785% ( 2) 00:07:54.220 16.640 - 16.738: 99.0847% ( 1) 00:07:54.220 16.837 - 16.935: 99.0972% ( 2) 00:07:54.220 17.132 - 17.231: 99.1096% ( 2) 00:07:54.220 17.231 - 17.329: 99.1159% ( 1) 00:07:54.220 17.428 - 17.526: 99.1221% ( 1) 00:07:54.220 17.625 - 17.723: 99.1283% ( 1) 00:07:54.220 17.723 - 17.822: 99.1345% ( 1) 00:07:54.220 17.822 - 17.920: 99.1408% ( 1) 00:07:54.220 17.920 - 18.018: 99.1470% ( 1) 00:07:54.220 18.018 - 18.117: 99.1532% ( 1) 00:07:54.220 18.708 - 18.806: 99.1595% ( 1) 00:07:54.220 19.594 - 19.692: 99.1657% ( 1) 00:07:54.220 19.791 - 19.889: 99.1781% ( 2) 00:07:54.220 19.889 - 19.988: 99.1906% ( 2) 00:07:54.220 19.988 - 20.086: 99.2093% ( 3) 00:07:54.220 20.086 - 20.185: 99.2155% ( 1) 00:07:54.220 20.185 - 20.283: 99.2217% ( 1) 00:07:54.220 20.283 - 20.382: 99.2279% ( 1) 00:07:54.220 20.382 - 20.480: 99.2404% ( 2) 00:07:54.220 21.662 - 21.760: 99.2466% ( 1) 00:07:54.220 22.055 - 22.154: 99.2902% ( 7) 00:07:54.220 22.154 - 22.252: 99.3774% ( 14) 00:07:54.220 22.252 - 22.351: 99.5081% ( 21) 00:07:54.220 22.351 - 22.449: 99.6389% ( 21) 00:07:54.220 22.449 - 22.548: 99.7198% ( 13) 00:07:54.220 22.548 - 22.646: 99.7821% ( 10) 00:07:54.220 22.745 - 22.843: 99.7945% ( 2) 00:07:54.220 22.843 - 22.942: 99.8070% ( 2) 00:07:54.220 23.040 - 23.138: 99.8132% ( 1) 00:07:54.220 23.335 - 23.434: 99.8194% ( 1) 00:07:54.220 23.434 - 23.532: 99.8257% ( 1) 00:07:54.220 23.532 - 23.631: 99.8319% ( 1) 00:07:54.220 25.206 - 25.403: 99.8381% ( 1) 00:07:54.220 25.797 - 25.994: 99.8506% ( 2) 00:07:54.220 27.569 - 27.766: 99.8568% ( 1) 00:07:54.220 27.766 - 27.963: 99.8630% ( 1) 00:07:54.220 27.963 - 28.160: 99.8692% ( 1) 00:07:54.220 28.357 - 28.554: 99.8755% ( 1) 00:07:54.220 28.751 - 28.948: 99.8817% ( 1) 00:07:54.220 29.342 - 29.538: 99.8879% ( 1) 00:07:54.220 29.735 - 29.932: 99.8942% ( 1) 00:07:54.220 30.129 - 30.326: 99.9004% ( 1) 00:07:54.220 31.705 - 31.902: 99.9066% ( 1) 00:07:54.220 32.886 - 33.083: 99.9128% ( 1) 00:07:54.220 34.462 - 34.658: 99.9191% ( 1) 00:07:54.220 39.188 - 39.385: 99.9253% ( 1) 00:07:54.220 39.975 
- 40.172: 99.9315% ( 1) 00:07:54.220 40.369 - 40.566: 99.9377% ( 1) 00:07:54.220 41.551 - 41.748: 99.9440% ( 1) 00:07:54.220 42.338 - 42.535: 99.9564% ( 2) 00:07:54.220 42.535 - 42.732: 99.9626% ( 1) 00:07:54.220 45.095 - 45.292: 99.9689% ( 1) 00:07:54.220 46.474 - 46.671: 99.9751% ( 1) 00:07:54.220 51.200 - 51.594: 99.9813% ( 1) 00:07:54.220 52.382 - 52.775: 99.9875% ( 1) 00:07:54.220 61.834 - 62.228: 99.9938% ( 1) 00:07:54.220 270.966 - 272.542: 100.0000% ( 1) 00:07:54.220 00:07:54.220 00:07:54.220 real 0m1.183s 00:07:54.220 user 0m1.058s 00:07:54.220 sys 0m0.076s 00:07:54.220 ************************************ 00:07:54.220 END TEST nvme_overhead 00:07:54.220 ************************************ 00:07:54.220 21:13:43 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.220 21:13:43 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:54.220 21:13:43 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:54.220 21:13:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:54.220 21:13:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.220 21:13:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.220 ************************************ 00:07:54.220 START TEST nvme_arbitration 00:07:54.220 ************************************ 00:07:54.220 21:13:43 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:57.511 Initializing NVMe Controllers 00:07:57.511 Attached to 0000:00:10.0 00:07:57.511 Attached to 0000:00:11.0 00:07:57.511 Attached to 0000:00:13.0 00:07:57.511 Attached to 0000:00:12.0 00:07:57.511 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:57.511 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:57.511 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:57.511 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:57.511 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:57.511 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:57.511 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:57.511 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:57.511 Initialization complete. Launching workers. 
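The arbitration workers being launched here come from the standalone example binary; run_test only wraps it with timing and the START/END banners. A minimal reproduction outside the harness, assuming the same build tree as the log, would be:

# Hedged sketch: re-run the NVMe arbitration example by hand. The binary path
# and the expanded option list are copied from the log above; the privilege
# setup (sudo) is an assumption about the environment.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_ROOT/build/examples/arbitration" -t 3 -i 0
# The example echoes its effective configuration, matching the line above:
#   arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0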
00:07:57.511 Starting thread on core 1 with urgent priority queue 00:07:57.511 Starting thread on core 2 with urgent priority queue 00:07:57.511 Starting thread on core 3 with urgent priority queue 00:07:57.511 Starting thread on core 0 with urgent priority queue 00:07:57.511 QEMU NVMe Ctrl (12340 ) core 0: 6933.33 IO/s 14.42 secs/100000 ios 00:07:57.511 QEMU NVMe Ctrl (12342 ) core 0: 6933.33 IO/s 14.42 secs/100000 ios 00:07:57.511 QEMU NVMe Ctrl (12341 ) core 1: 6976.00 IO/s 14.33 secs/100000 ios 00:07:57.511 QEMU NVMe Ctrl (12342 ) core 1: 6976.00 IO/s 14.33 secs/100000 ios 00:07:57.511 QEMU NVMe Ctrl (12343 ) core 2: 6464.00 IO/s 15.47 secs/100000 ios 00:07:57.511 QEMU NVMe Ctrl (12342 ) core 3: 6464.00 IO/s 15.47 secs/100000 ios 00:07:57.511 ======================================================== 00:07:57.511 00:07:57.511 00:07:57.511 real 0m3.208s 00:07:57.511 user 0m9.010s 00:07:57.511 sys 0m0.100s 00:07:57.511 ************************************ 00:07:57.511 END TEST nvme_arbitration 00:07:57.511 ************************************ 00:07:57.511 ************************************ 00:07:57.511 START TEST nvme_single_aen 00:07:57.511 ************************************ 00:07:57.511 21:13:46 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.511 21:13:46 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:57.511 21:13:46 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:57.511 21:13:46 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.511 21:13:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.511 21:13:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.511 21:13:46 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:57.511 Asynchronous Event Request test 00:07:57.511 Attached to 0000:00:10.0 00:07:57.511 Attached to 0000:00:11.0 00:07:57.511 Attached to 0000:00:13.0 00:07:57.512 Attached to 0000:00:12.0 00:07:57.512 Reset controller to setup AER completions for this process 00:07:57.512 Registering asynchronous event callbacks... 
00:07:57.512 Getting orig temperature thresholds of all controllers 00:07:57.512 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:57.512 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:57.512 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:57.512 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:57.512 Setting all controllers temperature threshold low to trigger AER 00:07:57.512 Waiting for all controllers temperature threshold to be set lower 00:07:57.512 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:57.512 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:57.512 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:57.512 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:57.512 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:57.512 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:57.512 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:57.512 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:57.512 Waiting for all controllers to trigger AER and reset threshold 00:07:57.512 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.512 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.512 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.512 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.512 Cleaning up... 00:07:57.512 00:07:57.512 real 0m0.186s 00:07:57.512 user 0m0.068s 00:07:57.512 sys 0m0.072s 00:07:57.512 ************************************ 00:07:57.512 END TEST nvme_single_aen 00:07:57.512 ************************************ 00:07:57.512 21:13:47 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.512 21:13:47 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:57.512 21:13:47 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:57.512 21:13:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.512 21:13:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.512 21:13:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.512 ************************************ 00:07:57.512 START TEST nvme_doorbell_aers 00:07:57.512 ************************************ 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:57.512 21:13:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:57.773 [2024-12-16 21:13:47.317348] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:07.768 Executing: test_write_invalid_db 00:08:07.768 Waiting for AER completion... 00:08:07.768 Failure: test_write_invalid_db 00:08:07.768 00:08:07.768 Executing: test_invalid_db_write_overflow_sq 00:08:07.768 Waiting for AER completion... 00:08:07.768 Failure: test_invalid_db_write_overflow_sq 00:08:07.768 00:08:07.768 Executing: test_invalid_db_write_overflow_cq 00:08:07.768 Waiting for AER completion... 00:08:07.768 Failure: test_invalid_db_write_overflow_cq 00:08:07.768 00:08:07.768 21:13:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:07.768 21:13:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:07.768 [2024-12-16 21:13:57.339575] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:17.758 Executing: test_write_invalid_db 00:08:17.758 Waiting for AER completion... 00:08:17.758 Failure: test_write_invalid_db 00:08:17.758 00:08:17.758 Executing: test_invalid_db_write_overflow_sq 00:08:17.758 Waiting for AER completion... 00:08:17.758 Failure: test_invalid_db_write_overflow_sq 00:08:17.758 00:08:17.758 Executing: test_invalid_db_write_overflow_cq 00:08:17.758 Waiting for AER completion... 00:08:17.758 Failure: test_invalid_db_write_overflow_cq 00:08:17.758 00:08:17.758 21:14:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:17.758 21:14:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:17.758 [2024-12-16 21:14:07.368175] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:27.727 Executing: test_write_invalid_db 00:08:27.727 Waiting for AER completion... 00:08:27.727 Failure: test_write_invalid_db 00:08:27.727 00:08:27.727 Executing: test_invalid_db_write_overflow_sq 00:08:27.727 Waiting for AER completion... 00:08:27.727 Failure: test_invalid_db_write_overflow_sq 00:08:27.727 00:08:27.727 Executing: test_invalid_db_write_overflow_cq 00:08:27.727 Waiting for AER completion... 
00:08:27.727 Failure: test_invalid_db_write_overflow_cq 00:08:27.727 00:08:27.727 21:14:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:27.727 21:14:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:27.728 [2024-12-16 21:14:17.395840] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.708 Executing: test_write_invalid_db 00:08:37.708 Waiting for AER completion... 00:08:37.708 Failure: test_write_invalid_db 00:08:37.708 00:08:37.708 Executing: test_invalid_db_write_overflow_sq 00:08:37.708 Waiting for AER completion... 00:08:37.708 Failure: test_invalid_db_write_overflow_sq 00:08:37.708 00:08:37.708 Executing: test_invalid_db_write_overflow_cq 00:08:37.708 Waiting for AER completion... 00:08:37.708 Failure: test_invalid_db_write_overflow_cq 00:08:37.708 00:08:37.708 00:08:37.708 real 0m40.170s 00:08:37.708 user 0m34.193s 00:08:37.708 sys 0m5.618s 00:08:37.708 21:14:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.708 21:14:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:37.708 ************************************ 00:08:37.709 END TEST nvme_doorbell_aers 00:08:37.709 ************************************ 00:08:37.709 21:14:27 nvme -- nvme/nvme.sh@97 -- # uname 00:08:37.709 21:14:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:37.709 21:14:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:37.709 21:14:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:37.709 21:14:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.709 21:14:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.709 ************************************ 00:08:37.709 START TEST nvme_multi_aen 00:08:37.709 ************************************ 00:08:37.709 21:14:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:37.967 [2024-12-16 21:14:27.446785] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.446929] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.446984] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.448079] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.448176] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.448224] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.449208] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. 
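The nvme_doorbell_aers test that just finished is a plain bash loop: it asks gen_nvme.sh for every NVMe PCIe address, then runs the doorbell_aers binary against each controller under a 10-second timeout. Condensed from the xtrace above (same paths; only the loop scaffolding is reconstructed):

# Per-controller doorbell/AER loop, reconstructed from the traced function.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    # --preserve-status keeps the binary's exit code even when timeout fires
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done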
Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.449295] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.449342] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.450290] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.450375] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 [2024-12-16 21:14:27.450385] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76548) is not found. Dropping the request. 00:08:37.967 Child process pid: 77074 00:08:37.967 [Child] Asynchronous Event Request test 00:08:37.967 [Child] Attached to 0000:00:10.0 00:08:37.967 [Child] Attached to 0000:00:11.0 00:08:37.967 [Child] Attached to 0000:00:13.0 00:08:37.967 [Child] Attached to 0000:00:12.0 00:08:37.967 [Child] Registering asynchronous event callbacks... 00:08:37.967 [Child] Getting orig temperature thresholds of all controllers 00:08:37.967 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.967 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.967 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.967 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.967 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:37.967 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.967 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.967 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.967 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.967 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.967 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.967 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.967 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.967 [Child] Cleaning up... 00:08:38.225 Asynchronous Event Request test 00:08:38.225 Attached to 0000:00:10.0 00:08:38.225 Attached to 0000:00:11.0 00:08:38.225 Attached to 0000:00:13.0 00:08:38.225 Attached to 0000:00:12.0 00:08:38.225 Reset controller to setup AER completions for this process 00:08:38.225 Registering asynchronous event callbacks... 
00:08:38.225 Getting orig temperature thresholds of all controllers 00:08:38.225 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.225 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.225 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.225 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.225 Setting all controllers temperature threshold low to trigger AER 00:08:38.225 Waiting for all controllers temperature threshold to be set lower 00:08:38.225 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.225 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:38.225 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.225 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:38.225 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.225 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:38.225 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.225 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:38.225 Waiting for all controllers to trigger AER and reset threshold 00:08:38.225 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.225 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.225 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.225 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.225 Cleaning up... 00:08:38.225 00:08:38.225 real 0m0.375s 00:08:38.225 user 0m0.121s 00:08:38.225 sys 0m0.139s 00:08:38.225 21:14:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.225 21:14:27 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:38.225 ************************************ 00:08:38.225 END TEST nvme_multi_aen 00:08:38.225 ************************************ 00:08:38.225 21:14:27 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:38.225 21:14:27 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:38.225 21:14:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.226 21:14:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.226 ************************************ 00:08:38.226 START TEST nvme_startup 00:08:38.226 ************************************ 00:08:38.226 21:14:27 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:38.226 Initializing NVMe Controllers 00:08:38.226 Attached to 0000:00:10.0 00:08:38.226 Attached to 0000:00:11.0 00:08:38.226 Attached to 0000:00:13.0 00:08:38.226 Attached to 0000:00:12.0 00:08:38.226 Initialization complete. 00:08:38.226 Time used:125011.453 (us). 
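Both aer runs above (nvme_single_aen and nvme_multi_aen) exercise the same sequence: read each controller's temperature threshold, set it below the current temperature so the drive raises an Asynchronous Event, handle it in aer_cb, and restore the threshold. Judging from the two invocations in this log, -T selects the temperature-threshold test and -m adds the forked [Child] pass; a sketch of both runs:

# Hedged sketch of the two AER invocations. Paths and flags are verbatim from
# the log; the flag meanings are inferred from the outputs, not from aer --help.
aer=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
sudo "$aer" -T -i 0        # single-process temperature-threshold AER test
sudo "$aer" -m -T -i 0     # multi-process variant: parent plus one [Child] pass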
00:08:38.226 00:08:38.226 real 0m0.169s 00:08:38.226 user 0m0.059s 00:08:38.226 sys 0m0.065s 00:08:38.226 21:14:27 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.226 21:14:27 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:38.226 ************************************ 00:08:38.226 END TEST nvme_startup 00:08:38.226 ************************************ 00:08:38.226 21:14:27 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:38.226 21:14:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.226 21:14:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.226 21:14:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.483 ************************************ 00:08:38.483 START TEST nvme_multi_secondary 00:08:38.483 ************************************ 00:08:38.483 21:14:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:38.483 21:14:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77119 00:08:38.483 21:14:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77120 00:08:38.483 21:14:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:38.483 21:14:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:38.483 21:14:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:41.769 Initializing NVMe Controllers 00:08:41.769 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:41.769 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.769 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.769 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.769 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:41.769 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:41.769 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:41.769 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:41.769 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:41.769 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:41.769 Initialization complete. Launching workers. 
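nvme_multi_secondary checks SPDK's multi-process mode: three spdk_nvme_perf instances sharing instance id 0 attach to the same four controllers concurrently, each pinned to a disjoint core mask. The nvme.sh@51-57 trace above amounts to:

# Reconstructed from the trace: two backgrounded runs plus one foreground run,
# all sharing -i 0 so they attach to the same controllers at the same time.
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # nvme.sh@51-52
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # nvme.sh@53-54
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # nvme.sh@55
wait "$pid0"   # nvme.sh@56
wait "$pid1"   # nvme.sh@57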
00:08:41.769 ======================================================== 00:08:41.769 Latency(us) 00:08:41.769 Device Information : IOPS MiB/s Average min max 00:08:41.769 PCIE (0000:00:10.0) NSID 1 from core 2: 2035.21 7.95 7860.10 1345.55 35992.78 00:08:41.769 PCIE (0000:00:11.0) NSID 1 from core 2: 2035.21 7.95 7861.00 1313.06 33316.25 00:08:41.769 PCIE (0000:00:13.0) NSID 1 from core 2: 2035.21 7.95 7864.36 1408.13 34225.63 00:08:41.769 PCIE (0000:00:12.0) NSID 1 from core 2: 2035.21 7.95 7864.25 1418.68 29095.93 00:08:41.769 PCIE (0000:00:12.0) NSID 2 from core 2: 2035.21 7.95 7880.65 1326.97 32542.20 00:08:41.769 PCIE (0000:00:12.0) NSID 3 from core 2: 2035.21 7.95 7886.49 1409.40 28601.26 00:08:41.769 ======================================================== 00:08:41.769 Total : 12211.28 47.70 7869.48 1313.06 35992.78 00:08:41.769 00:08:41.769 Initializing NVMe Controllers 00:08:41.769 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:41.769 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.769 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.769 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.769 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:41.769 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:41.769 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:41.769 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:41.769 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:41.769 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:41.769 Initialization complete. Launching workers. 00:08:41.769 ======================================================== 00:08:41.769 Latency(us) 00:08:41.770 Device Information : IOPS MiB/s Average min max 00:08:41.770 PCIE (0000:00:10.0) NSID 1 from core 1: 5065.23 19.79 3157.09 990.41 12862.51 00:08:41.770 PCIE (0000:00:11.0) NSID 1 from core 1: 5065.23 19.79 3158.41 943.45 11972.91 00:08:41.770 PCIE (0000:00:13.0) NSID 1 from core 1: 5065.23 19.79 3158.45 974.13 12113.81 00:08:41.770 PCIE (0000:00:12.0) NSID 1 from core 1: 5065.23 19.79 3158.49 1018.19 13053.39 00:08:41.770 PCIE (0000:00:12.0) NSID 2 from core 1: 5065.23 19.79 3158.50 1025.49 13566.24 00:08:41.770 PCIE (0000:00:12.0) NSID 3 from core 1: 5065.23 19.79 3158.53 1019.62 13835.22 00:08:41.770 ======================================================== 00:08:41.770 Total : 30391.41 118.72 3158.25 943.45 13835.22 00:08:41.770 00:08:41.770 21:14:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77119 00:08:43.682 Initializing NVMe Controllers 00:08:43.682 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.682 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.682 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.682 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.682 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:43.682 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:43.682 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:43.682 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:43.682 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:43.682 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:43.682 Initialization complete. Launching workers. 
00:08:43.682 ======================================================== 00:08:43.682 Latency(us) 00:08:43.682 Device Information : IOPS MiB/s Average min max 00:08:43.682 PCIE (0000:00:10.0) NSID 1 from core 0: 6245.99 24.40 2560.17 670.73 12907.25 00:08:43.682 PCIE (0000:00:11.0) NSID 1 from core 0: 6245.99 24.40 2561.48 687.52 13422.48 00:08:43.682 PCIE (0000:00:13.0) NSID 1 from core 0: 6245.99 24.40 2561.47 702.47 13878.72 00:08:43.682 PCIE (0000:00:12.0) NSID 1 from core 0: 6245.99 24.40 2561.45 690.91 13705.48 00:08:43.682 PCIE (0000:00:12.0) NSID 2 from core 0: 6245.99 24.40 2561.43 697.19 13292.72 00:08:43.682 PCIE (0000:00:12.0) NSID 3 from core 0: 6245.99 24.40 2561.41 698.78 13922.17 00:08:43.682 ======================================================== 00:08:43.682 Total : 37475.94 146.39 2561.24 670.73 13922.17 00:08:43.682 00:08:43.682 21:14:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77120 00:08:43.682 21:14:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:43.682 21:14:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77189 00:08:43.682 21:14:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77190 00:08:43.682 21:14:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:43.682 21:14:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:46.985 Initializing NVMe Controllers 00:08:46.985 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.985 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.985 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.985 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.985 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:46.985 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:46.985 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:46.985 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:46.985 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:46.985 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:46.985 Initialization complete. Launching workers. 
00:08:46.985 ======================================================== 00:08:46.985 Latency(us) 00:08:46.985 Device Information : IOPS MiB/s Average min max 00:08:46.985 PCIE (0000:00:10.0) NSID 1 from core 1: 3261.44 12.74 4903.71 800.62 12566.02 00:08:46.985 PCIE (0000:00:11.0) NSID 1 from core 1: 3261.44 12.74 4905.90 893.79 12883.18 00:08:46.985 PCIE (0000:00:13.0) NSID 1 from core 1: 3261.44 12.74 4906.34 971.60 13815.44 00:08:46.985 PCIE (0000:00:12.0) NSID 1 from core 1: 3261.44 12.74 4907.31 967.41 13627.19 00:08:46.985 PCIE (0000:00:12.0) NSID 2 from core 1: 3261.44 12.74 4907.18 884.40 13669.41 00:08:46.985 PCIE (0000:00:12.0) NSID 3 from core 1: 3266.77 12.76 4899.05 870.93 12851.37 00:08:46.985 ======================================================== 00:08:46.985 Total : 19573.99 76.46 4904.91 800.62 13815.44 00:08:46.985 00:08:46.985 Initializing NVMe Controllers 00:08:46.985 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.985 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.985 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.985 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.985 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:46.985 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:46.985 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:46.985 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:46.985 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:46.985 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:46.985 Initialization complete. Launching workers. 00:08:46.985 ======================================================== 00:08:46.985 Latency(us) 00:08:46.985 Device Information : IOPS MiB/s Average min max 00:08:46.985 PCIE (0000:00:10.0) NSID 1 from core 0: 3246.07 12.68 4926.65 911.25 14293.40 00:08:46.985 PCIE (0000:00:11.0) NSID 1 from core 0: 3246.07 12.68 4927.73 938.39 14735.24 00:08:46.985 PCIE (0000:00:13.0) NSID 1 from core 0: 3246.07 12.68 4928.43 941.90 13120.37 00:08:46.985 PCIE (0000:00:12.0) NSID 1 from core 0: 3246.07 12.68 4928.26 941.19 13470.98 00:08:46.985 PCIE (0000:00:12.0) NSID 2 from core 0: 3246.07 12.68 4928.06 941.28 13735.63 00:08:46.985 PCIE (0000:00:12.0) NSID 3 from core 0: 3246.07 12.68 4927.85 940.98 14180.61 00:08:46.985 ======================================================== 00:08:46.985 Total : 19476.42 76.08 4927.83 911.25 14735.24 00:08:46.985 00:08:48.894 Initializing NVMe Controllers 00:08:48.894 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.894 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.894 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.894 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.894 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:48.894 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:48.894 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:48.894 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:48.894 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:48.894 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:48.894 Initialization complete. Launching workers. 
00:08:48.894 ======================================================== 00:08:48.894 Latency(us) 00:08:48.894 Device Information : IOPS MiB/s Average min max 00:08:48.894 PCIE (0000:00:10.0) NSID 1 from core 2: 1519.79 5.94 10526.24 1779.88 49630.35 00:08:48.894 PCIE (0000:00:11.0) NSID 1 from core 2: 1522.99 5.95 10512.92 1021.91 42576.51 00:08:48.894 PCIE (0000:00:13.0) NSID 1 from core 2: 1522.99 5.95 10512.83 1182.68 42157.55 00:08:48.894 PCIE (0000:00:12.0) NSID 1 from core 2: 1522.99 5.95 10512.70 1086.63 41622.35 00:08:48.894 PCIE (0000:00:12.0) NSID 2 from core 2: 1522.99 5.95 10511.94 1037.48 43412.37 00:08:48.894 PCIE (0000:00:12.0) NSID 3 from core 2: 1522.99 5.95 10512.33 1465.64 50228.78 00:08:48.894 ======================================================== 00:08:48.894 Total : 9134.72 35.68 10514.82 1021.91 50228.78 00:08:48.894 00:08:48.894 ************************************ 00:08:48.894 END TEST nvme_multi_secondary 00:08:48.894 ************************************ 00:08:48.894 21:14:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77189 00:08:48.894 21:14:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77190 00:08:48.894 00:08:48.894 real 0m10.428s 00:08:48.894 user 0m18.201s 00:08:48.894 sys 0m0.562s 00:08:48.894 21:14:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.894 21:14:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:48.894 21:14:38 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:48.894 21:14:38 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:48.894 21:14:38 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/76163 ]] 00:08:48.894 21:14:38 nvme -- common/autotest_common.sh@1094 -- # kill 76163 00:08:48.894 21:14:38 nvme -- common/autotest_common.sh@1095 -- # wait 76163 00:08:48.894 [2024-12-16 21:14:38.407382] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.407485] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.407512] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.407538] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.408846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.408944] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.408970] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.408994] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.410199] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 
00:08:48.894 [2024-12-16 21:14:38.410297] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.410322] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.410349] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.411662] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.411750] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.411781] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 [2024-12-16 21:14:38.411804] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77073) is not found. Dropping the request. 00:08:48.894 21:14:38 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:48.894 21:14:38 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:48.894 21:14:38 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:48.894 21:14:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.894 21:14:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.894 21:14:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.894 ************************************ 00:08:48.894 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:48.894 ************************************ 00:08:48.894 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:48.894 * Looking for test storage... 
00:08:48.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:48.894 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:48.894 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:48.894 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.155 --rc genhtml_branch_coverage=1 00:08:49.155 --rc genhtml_function_coverage=1 00:08:49.155 --rc genhtml_legend=1 00:08:49.155 --rc geninfo_all_blocks=1 00:08:49.155 --rc geninfo_unexecuted_blocks=1 00:08:49.155 00:08:49.155 ' 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.155 --rc genhtml_branch_coverage=1 00:08:49.155 --rc genhtml_function_coverage=1 00:08:49.155 --rc genhtml_legend=1 00:08:49.155 --rc geninfo_all_blocks=1 00:08:49.155 --rc geninfo_unexecuted_blocks=1 00:08:49.155 00:08:49.155 ' 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.155 --rc genhtml_branch_coverage=1 00:08:49.155 --rc genhtml_function_coverage=1 00:08:49.155 --rc genhtml_legend=1 00:08:49.155 --rc geninfo_all_blocks=1 00:08:49.155 --rc geninfo_unexecuted_blocks=1 00:08:49.155 00:08:49.155 ' 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.155 --rc genhtml_branch_coverage=1 00:08:49.155 --rc genhtml_function_coverage=1 00:08:49.155 --rc genhtml_legend=1 00:08:49.155 --rc geninfo_all_blocks=1 00:08:49.155 --rc geninfo_unexecuted_blocks=1 00:08:49.155 00:08:49.155 ' 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:49.155 
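The dense scripts/common.sh trace above is only a version gate: lt 1.15 2 splits both dotted version strings on '.', '-' and ':' and compares the fields numerically, field by field, to pick the right lcov option set. A minimal standalone equivalent of the traced logic (reconstructed; not the verbatim library function):

# Sketch of the version comparison traced above.
lt() {                     # returns 0 (true) when $1 < $2
    local -a v1 v2; local i
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
    done
    return 1               # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov predates 2.x, use the --rc lcov_* option spelling"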
21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:49.155 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:49.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77358 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77358 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 77358 ']' 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
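With the injection parameters set (hold an admin command for up to 15 s, then fail it with sct 0 / sc 1), the script picks the first NVMe address reported by gen_nvme.sh, here 0000:00:10.0, and starts spdk_tgt on four cores, waiting for its RPC socket. Roughly:

# Hedged sketch of the target bring-up traced above; waitforlisten is the
# autotest_common.sh helper that polls /var/tmp/spdk.sock until it answers.
rootdir=/home/vagrant/spdk_repo/spdk
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[0].params.traddr')  # 0000:00:10.0 here
"$rootdir/build/bin/spdk_tgt" -m 0xF &
spdk_target_pid=$!
waitforlisten "$spdk_target_pid"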
00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.156 21:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.156 [2024-12-16 21:14:38.765989] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:49.156 [2024-12-16 21:14:38.766270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77358 ] 00:08:49.416 [2024-12-16 21:14:38.920605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.416 [2024-12-16 21:14:38.942494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.416 [2024-12-16 21:14:38.942770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.416 [2024-12-16 21:14:38.943050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.416 [2024-12-16 21:14:38.943131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.985 nvme0n1 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_EyDXw.txt 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.985 true 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734383679 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77381 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:49.985 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:49.986 21:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.524 [2024-12-16 21:14:41.692546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:52.524 [2024-12-16 21:14:41.692954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:52.524 [2024-12-16 21:14:41.692986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:52.524 [2024-12-16 21:14:41.693002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:52.524 [2024-12-16 21:14:41.694673] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:52.524 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77381 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77381 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77381 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_EyDXw.txt 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:52.524 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_EyDXw.txt 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77358 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 77358 ']' 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 77358 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77358 00:08:52.525 killing process with pid 77358 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77358' 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 77358 00:08:52.525 21:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 77358 00:08:52.525 21:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:52.525 21:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:52.525 00:08:52.525 real 0m3.571s 00:08:52.525 user 0m12.806s 00:08:52.525 sys 0m0.438s 00:08:52.525 ************************************ 00:08:52.525 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:52.525 
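Condensed, the test that just finished does the following: inject a one-shot error (SCT 0 / SC 1) on admin opcode 0x0a with a 15 s stall via bdev_nvme_add_error_injection, fire a GET FEATURES through bdev_nvme_send_cmd so it gets stuck, reset the controller, then verify both that the returned completion carries the injected status and that the whole round trip stayed inside test_timeout. A minimal sketch of the status decode at the end, assuming the saved .cpl blob is the 16-byte NVMe completion with the status word in bytes 14-15 (little-endian):

    # extract ((status >> off) & mask) from the base64-encoded completion blob
    base64_decode_bits() {
        local b64=$1 off=$2 mask=$3
        local -a bytes
        bytes=($(base64 -d <<< "$b64" | hexdump -ve '/1 "0x%02x\n"'))
        local status=$(( bytes[14] | (bytes[15] << 8) ))
        printf '0x%x\n' $(( (status >> off) & mask ))
    }
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1, as injected
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0, as injected

With status = 0x0002, bits 1-8 give SC 0x1 and bits 9 up give SCT 0x0, matching the err_injection_sc/err_injection_sct values the test set at the top.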
************************************ 00:08:52.525 21:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.525 21:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.525 21:14:42 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:52.525 21:14:42 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:52.525 21:14:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.525 21:14:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.525 21:14:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.525 ************************************ 00:08:52.525 START TEST nvme_fio 00:08:52.525 ************************************ 00:08:52.525 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:52.525 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:52.525 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:52.525 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:52.525 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.525 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.525 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.525 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.525 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:52.525 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:52.525 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.525 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:52.525 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:52.525 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:52.525 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:52.525 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:52.834 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:52.834 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:53.119 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:53.119 21:14:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1344 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:53.119 21:14:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:53.119 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:53.119 fio-3.35 00:08:53.119 Starting 1 thread 00:08:58.410 00:08:58.410 test: (groupid=0, jobs=1): err= 0: pid=77506: Mon Dec 16 21:14:47 2024 00:08:58.410 read: IOPS=16.5k, BW=64.6MiB/s (67.7MB/s)(129MiB/2001msec) 00:08:58.410 slat (nsec): min=4218, max=74046, avg=5684.85, stdev=3120.90 00:08:58.410 clat (usec): min=254, max=11631, avg=3851.36, stdev=1322.94 00:08:58.410 lat (usec): min=259, max=11672, avg=3857.04, stdev=1323.94 00:08:58.410 clat percentiles (usec): 00:08:58.410 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:08:58.410 | 30.00th=[ 2868], 40.00th=[ 3064], 50.00th=[ 3359], 60.00th=[ 4015], 00:08:58.410 | 70.00th=[ 4621], 80.00th=[ 5145], 90.00th=[ 5735], 95.00th=[ 6194], 00:08:58.410 | 99.00th=[ 7177], 99.50th=[ 7767], 99.90th=[ 9634], 99.95th=[10814], 00:08:58.410 | 99.99th=[11469] 00:08:58.410 bw ( KiB/s): min=61656, max=67672, per=99.07%, avg=65496.00, stdev=3335.38, samples=3 00:08:58.410 iops : min=15414, max=16918, avg=16374.00, stdev=833.84, samples=3 00:08:58.410 write: IOPS=16.6k, BW=64.7MiB/s (67.8MB/s)(129MiB/2001msec); 0 zone resets 00:08:58.410 slat (nsec): min=4280, max=54908, avg=5875.99, stdev=3022.43 00:08:58.410 clat (usec): min=226, max=11544, avg=3863.45, stdev=1317.55 00:08:58.410 lat (usec): min=231, max=11563, avg=3869.32, stdev=1318.51 00:08:58.410 clat percentiles (usec): 00:08:58.410 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2704], 00:08:58.410 | 30.00th=[ 2868], 40.00th=[ 3064], 50.00th=[ 3359], 60.00th=[ 4015], 00:08:58.410 | 70.00th=[ 4621], 80.00th=[ 5145], 90.00th=[ 5735], 95.00th=[ 6259], 00:08:58.410 | 99.00th=[ 7177], 99.50th=[ 7635], 99.90th=[ 9634], 99.95th=[10945], 00:08:58.410 | 99.99th=[11469] 00:08:58.410 bw ( KiB/s): min=61960, max=67064, per=98.61%, avg=65306.67, stdev=2899.52, samples=3 00:08:58.410 iops : min=15490, max=16766, avg=16326.67, stdev=724.88, samples=3 00:08:58.410 lat (usec) : 250=0.01%, 500=0.02%, 750=0.03%, 1000=0.02% 00:08:58.410 lat (msec) : 2=0.36%, 4=59.50%, 10=40.00%, 20=0.07% 00:08:58.410 cpu : usr=98.85%, sys=0.00%, ctx=4, majf=0, minf=625 
00:08:58.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:58.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:58.410 issued rwts: total=33073,33131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.410 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:58.410 00:08:58.410 Run status group 0 (all jobs): 00:08:58.410 READ: bw=64.6MiB/s (67.7MB/s), 64.6MiB/s-64.6MiB/s (67.7MB/s-67.7MB/s), io=129MiB (135MB), run=2001-2001msec 00:08:58.410 WRITE: bw=64.7MiB/s (67.8MB/s), 64.7MiB/s-64.7MiB/s (67.8MB/s-67.8MB/s), io=129MiB (136MB), run=2001-2001msec 00:08:58.410 ----------------------------------------------------- 00:08:58.410 Suppressions used: 00:08:58.410 count bytes template 00:08:58.410 1 32 /usr/src/fio/parse.c 00:08:58.410 1 8 libtcmalloc_minimal.so 00:08:58.410 ----------------------------------------------------- 00:08:58.410 00:08:58.410 21:14:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:58.410 21:14:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:58.410 21:14:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:58.410 21:14:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:58.672 21:14:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:58.672 21:14:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:58.672 21:14:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:58.672 21:14:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:58.672 21:14:48 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:58.672 21:14:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:58.933 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:58.933 fio-3.35 00:08:58.933 Starting 1 thread 00:09:04.219 00:09:04.219 test: (groupid=0, jobs=1): err= 0: pid=77561: Mon Dec 16 21:14:53 2024 00:09:04.219 read: IOPS=15.4k, BW=60.2MiB/s (63.1MB/s)(120MiB/2001msec) 00:09:04.219 slat (nsec): min=4221, max=81937, avg=5968.58, stdev=3455.69 00:09:04.219 clat (usec): min=227, max=13334, avg=4128.81, stdev=1444.87 00:09:04.219 lat (usec): min=232, max=13375, avg=4134.78, stdev=1446.14 00:09:04.219 clat percentiles (usec): 00:09:04.219 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2769], 00:09:04.219 | 30.00th=[ 2999], 40.00th=[ 3294], 50.00th=[ 3851], 60.00th=[ 4490], 00:09:04.219 | 70.00th=[ 4948], 80.00th=[ 5407], 90.00th=[ 6063], 95.00th=[ 6652], 00:09:04.219 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[ 9765], 99.95th=[10290], 00:09:04.219 | 99.99th=[13173] 00:09:04.219 bw ( KiB/s): min=58264, max=65792, per=99.24%, avg=61173.33, stdev=4044.63, samples=3 00:09:04.219 iops : min=14566, max=16448, avg=15293.33, stdev=1011.16, samples=3 00:09:04.219 write: IOPS=15.4k, BW=60.2MiB/s (63.2MB/s)(121MiB/2001msec); 0 zone resets 00:09:04.219 slat (nsec): min=4285, max=79063, avg=6231.50, stdev=3525.20 00:09:04.219 clat (usec): min=235, max=13258, avg=4148.28, stdev=1435.57 00:09:04.219 lat (usec): min=240, max=13277, avg=4154.51, stdev=1436.83 00:09:04.219 clat percentiles (usec): 00:09:04.219 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2802], 00:09:04.219 | 30.00th=[ 3032], 40.00th=[ 3326], 50.00th=[ 3884], 60.00th=[ 4490], 00:09:04.219 | 70.00th=[ 5014], 80.00th=[ 5473], 90.00th=[ 6063], 95.00th=[ 6652], 00:09:04.219 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9896], 99.95th=[10159], 00:09:04.219 | 99.99th=[13173] 00:09:04.219 bw ( KiB/s): min=58304, max=65240, per=98.43%, avg=60722.67, stdev=3915.40, samples=3 00:09:04.219 iops : min=14576, max=16310, avg=15180.67, stdev=978.85, samples=3 00:09:04.219 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:09:04.219 lat (msec) : 2=0.25%, 4=51.51%, 10=48.12%, 20=0.07% 00:09:04.219 cpu : usr=98.55%, sys=0.20%, ctx=5, majf=0, minf=625 00:09:04.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:04.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.219 issued rwts: total=30837,30860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.219 00:09:04.219 Run status group 0 (all jobs): 00:09:04.219 READ: bw=60.2MiB/s (63.1MB/s), 60.2MiB/s-60.2MiB/s (63.1MB/s-63.1MB/s), io=120MiB (126MB), run=2001-2001msec 00:09:04.219 WRITE: bw=60.2MiB/s (63.2MB/s), 60.2MiB/s-60.2MiB/s (63.2MB/s-63.2MB/s), io=121MiB (126MB), run=2001-2001msec 00:09:04.219 ----------------------------------------------------- 00:09:04.219 Suppressions used: 00:09:04.219 count bytes template 00:09:04.219 1 32 /usr/src/fio/parse.c 00:09:04.219 1 8 libtcmalloc_minimal.so 00:09:04.219 ----------------------------------------------------- 
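Each per-drive fio pass above repeats the same preload dance: ldd the SPDK fio plugin, and if it links against ASAN, put the sanitizer runtime in LD_PRELOAD ahead of the plugin so it initializes before fio dlopens the ioengine. A minimal sketch with the paths from this run:

    fio=/usr/src/fio/fio
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # /usr/lib64/libasan.so.8 on this builder; empty on non-ASAN builds
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        "$fio" "$job" '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096

The traddr is written with dots (0000.00.12.0) rather than colons because fio treats ':' as a filename separator, so a raw PCI address would be split apart.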
00:09:04.219 00:09:04.219 21:14:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:04.219 21:14:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:04.219 21:14:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:04.219 21:14:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:04.219 21:14:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:04.219 21:14:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:04.219 21:14:53 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:04.220 21:14:53 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:04.220 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:04.220 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:04.220 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:04.220 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:04.220 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.220 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:04.220 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:04.220 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:04.480 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.480 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:04.480 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:04.480 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:04.480 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:04.480 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:04.480 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:04.480 21:14:53 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:04.480 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:04.480 fio-3.35 00:09:04.480 Starting 1 thread 00:09:09.767 00:09:09.768 test: (groupid=0, jobs=1): err= 0: pid=77616: Mon Dec 16 21:14:59 2024 00:09:09.768 read: IOPS=16.0k, BW=62.4MiB/s (65.4MB/s)(125MiB/2001msec) 00:09:09.768 slat (nsec): min=4205, max=72645, avg=5853.83, stdev=3333.45 00:09:09.768 clat (usec): min=478, max=11484, avg=3987.01, stdev=1385.71 00:09:09.768 lat (usec): min=496, max=11525, avg=3992.86, stdev=1386.89 00:09:09.768 clat percentiles (usec): 00:09:09.768 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2704], 
00:09:09.768 | 30.00th=[ 2900], 40.00th=[ 3130], 50.00th=[ 3523], 60.00th=[ 4293], 00:09:09.768 | 70.00th=[ 4817], 80.00th=[ 5342], 90.00th=[ 5997], 95.00th=[ 6521], 00:09:09.768 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8455], 99.95th=[ 8979], 00:09:09.768 | 99.99th=[11338] 00:09:09.768 bw ( KiB/s): min=60520, max=65472, per=98.79%, avg=63136.00, stdev=2487.85, samples=3 00:09:09.768 iops : min=15130, max=16368, avg=15784.00, stdev=621.96, samples=3 00:09:09.768 write: IOPS=16.0k, BW=62.5MiB/s (65.6MB/s)(125MiB/2001msec); 0 zone resets 00:09:09.768 slat (nsec): min=4287, max=80680, avg=6049.19, stdev=3410.05 00:09:09.768 clat (usec): min=971, max=11374, avg=3991.53, stdev=1380.72 00:09:09.768 lat (usec): min=977, max=11393, avg=3997.58, stdev=1381.86 00:09:09.768 clat percentiles (usec): 00:09:09.768 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2737], 00:09:09.768 | 30.00th=[ 2900], 40.00th=[ 3130], 50.00th=[ 3523], 60.00th=[ 4293], 00:09:09.768 | 70.00th=[ 4817], 80.00th=[ 5342], 90.00th=[ 5997], 95.00th=[ 6521], 00:09:09.768 | 99.00th=[ 7373], 99.50th=[ 7701], 99.90th=[ 8586], 99.95th=[ 8979], 00:09:09.768 | 99.99th=[11207] 00:09:09.768 bw ( KiB/s): min=60008, max=64776, per=98.07%, avg=62813.33, stdev=2493.20, samples=3 00:09:09.768 iops : min=15002, max=16194, avg=15703.33, stdev=623.30, samples=3 00:09:09.768 lat (usec) : 500=0.01%, 1000=0.01% 00:09:09.768 lat (msec) : 2=0.38%, 4=55.57%, 10=44.02%, 20=0.02% 00:09:09.768 cpu : usr=98.85%, sys=0.05%, ctx=4, majf=0, minf=625 00:09:09.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:09.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.768 issued rwts: total=31972,32041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.768 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.768 00:09:09.768 Run status group 0 (all jobs): 00:09:09.768 READ: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec 00:09:09.768 WRITE: bw=62.5MiB/s (65.6MB/s), 62.5MiB/s-62.5MiB/s (65.6MB/s-65.6MB/s), io=125MiB (131MB), run=2001-2001msec 00:09:09.768 ----------------------------------------------------- 00:09:09.768 Suppressions used: 00:09:09.768 count bytes template 00:09:09.768 1 32 /usr/src/fio/parse.c 00:09:09.768 1 8 libtcmalloc_minimal.so 00:09:09.768 ----------------------------------------------------- 00:09:09.768 00:09:09.768 21:14:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:09.768 21:14:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:10.027 21:14:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:10.028 21:14:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:10.028 21:14:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:10.028 21:14:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:10.289 21:14:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:10.289 21:14:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:10.289 21:14:59 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:10.550 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:10.550 fio-3.35 00:09:10.550 Starting 1 thread 00:09:13.854 00:09:13.854 test: (groupid=0, jobs=1): err= 0: pid=77671: Mon Dec 16 21:15:03 2024 00:09:13.854 read: IOPS=16.6k, BW=64.8MiB/s (67.9MB/s)(130MiB/2001msec) 00:09:13.854 slat (usec): min=3, max=177, avg= 6.21, stdev= 3.19 00:09:13.854 clat (usec): min=297, max=11616, avg=3831.99, stdev=1394.95 00:09:13.854 lat (usec): min=302, max=11671, avg=3838.19, stdev=1396.15 00:09:13.854 clat percentiles (usec): 00:09:13.854 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2704], 00:09:13.854 | 30.00th=[ 2835], 40.00th=[ 2999], 50.00th=[ 3228], 60.00th=[ 3654], 00:09:13.854 | 70.00th=[ 4424], 80.00th=[ 5211], 90.00th=[ 5997], 95.00th=[ 6521], 00:09:13.854 | 99.00th=[ 7504], 99.50th=[ 8225], 99.90th=[ 9372], 99.95th=[10159], 00:09:13.854 | 99.99th=[11469] 00:09:13.854 bw ( KiB/s): min=55048, max=77596, per=100.00%, avg=68998.67, stdev=12190.02, samples=3 00:09:13.854 iops : min=13762, max=19399, avg=17249.67, stdev=3047.51, samples=3 00:09:13.854 write: IOPS=16.6k, BW=64.9MiB/s (68.1MB/s)(130MiB/2001msec); 0 zone resets 00:09:13.854 slat (nsec): min=4048, max=79575, avg=6434.50, stdev=3117.79 00:09:13.854 clat (usec): min=223, max=11533, avg=3851.90, stdev=1393.44 00:09:13.854 lat (usec): min=228, max=11553, avg=3858.33, stdev=1394.60 00:09:13.854 clat percentiles (usec): 00:09:13.854 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2704], 00:09:13.854 | 30.00th=[ 2835], 40.00th=[ 3032], 50.00th=[ 3261], 60.00th=[ 3687], 00:09:13.854 | 70.00th=[ 4490], 80.00th=[ 
5211], 90.00th=[ 5997], 95.00th=[ 6521], 00:09:13.854 | 99.00th=[ 7504], 99.50th=[ 8160], 99.90th=[ 9241], 99.95th=[10159], 00:09:13.854 | 99.99th=[11469] 00:09:13.854 bw ( KiB/s): min=54608, max=77413, per=100.00%, avg=68895.00, stdev=12449.02, samples=3 00:09:13.854 iops : min=13652, max=19353, avg=17223.67, stdev=3112.17, samples=3 00:09:13.854 lat (usec) : 250=0.01%, 500=0.02%, 1000=0.03% 00:09:13.854 lat (msec) : 2=0.47%, 4=63.42%, 10=35.99%, 20=0.06% 00:09:13.854 cpu : usr=99.00%, sys=0.00%, ctx=9, majf=0, minf=625 00:09:13.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:13.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.854 issued rwts: total=33190,33256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.854 00:09:13.854 Run status group 0 (all jobs): 00:09:13.854 READ: bw=64.8MiB/s (67.9MB/s), 64.8MiB/s-64.8MiB/s (67.9MB/s-67.9MB/s), io=130MiB (136MB), run=2001-2001msec 00:09:13.854 WRITE: bw=64.9MiB/s (68.1MB/s), 64.9MiB/s-64.9MiB/s (68.1MB/s-68.1MB/s), io=130MiB (136MB), run=2001-2001msec 00:09:13.854 ----------------------------------------------------- 00:09:13.854 Suppressions used: 00:09:13.854 count bytes template 00:09:13.854 1 32 /usr/src/fio/parse.c 00:09:13.854 1 8 libtcmalloc_minimal.so 00:09:13.854 ----------------------------------------------------- 00:09:13.854 00:09:13.854 21:15:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:13.854 21:15:03 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:13.854 ************************************ 00:09:13.854 END TEST nvme_fio 00:09:13.854 ************************************ 00:09:13.854 00:09:13.854 real 0m21.399s 00:09:13.855 user 0m15.143s 00:09:13.855 sys 0m9.745s 00:09:13.855 21:15:03 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.855 21:15:03 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:14.116 00:09:14.116 real 1m28.392s 00:09:14.116 user 3m28.995s 00:09:14.116 sys 0m19.639s 00:09:14.116 21:15:03 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.116 ************************************ 00:09:14.116 END TEST nvme 00:09:14.116 ************************************ 00:09:14.116 21:15:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.116 21:15:03 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:14.116 21:15:03 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:14.116 21:15:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.116 21:15:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.116 21:15:03 -- common/autotest_common.sh@10 -- # set +x 00:09:14.116 ************************************ 00:09:14.116 START TEST nvme_scc 00:09:14.116 ************************************ 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:14.116 * Looking for test storage... 
00:09:14.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.116 --rc genhtml_branch_coverage=1 00:09:14.116 --rc genhtml_function_coverage=1 00:09:14.116 --rc genhtml_legend=1 00:09:14.116 --rc geninfo_all_blocks=1 00:09:14.116 --rc geninfo_unexecuted_blocks=1 00:09:14.116 00:09:14.116 ' 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.116 --rc genhtml_branch_coverage=1 00:09:14.116 --rc genhtml_function_coverage=1 00:09:14.116 --rc genhtml_legend=1 00:09:14.116 --rc geninfo_all_blocks=1 00:09:14.116 --rc geninfo_unexecuted_blocks=1 00:09:14.116 00:09:14.116 ' 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:14.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.116 --rc genhtml_branch_coverage=1 00:09:14.116 --rc genhtml_function_coverage=1 00:09:14.116 --rc genhtml_legend=1 00:09:14.116 --rc geninfo_all_blocks=1 00:09:14.116 --rc geninfo_unexecuted_blocks=1 00:09:14.116 00:09:14.116 ' 00:09:14.116 21:15:03 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.116 --rc genhtml_branch_coverage=1 00:09:14.116 --rc genhtml_function_coverage=1 00:09:14.116 --rc genhtml_legend=1 00:09:14.116 --rc geninfo_all_blocks=1 00:09:14.116 --rc geninfo_unexecuted_blocks=1 00:09:14.116 00:09:14.116 ' 00:09:14.116 21:15:03 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.116 21:15:03 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.116 21:15:03 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.116 21:15:03 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.116 21:15:03 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.116 21:15:03 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:14.116 21:15:03 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
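The starred banners and the '[' 2 -le 1 ']' checks bracketing each suite come from the run_test wrapper in autotest_common.sh. Roughly, under the behavior visible in this log (the real helper also manages xtrace state and per-test timing), it amounts to:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"                 # e.g. run_test nvme_scc .../test/nvme/nvme_scc.sh
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }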
00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:14.116 21:15:03 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:14.116 21:15:03 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.117 21:15:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:14.117 21:15:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:14.117 21:15:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:14.117 21:15:03 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:14.377 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.668 Waiting for block devices as requested 00:09:14.668 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.668 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.975 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.975 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.278 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:20.278 21:15:09 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:20.278 21:15:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:20.278 21:15:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:20.278 21:15:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:20.278 21:15:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:20.278 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
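Everything below is scan_nvme_ctrls/nvme_get walking the output of nvme id-ctrl one field at a time into an associative array, which is why the trace repeats the same IFS/read/eval pattern for every register. The loop reduces to roughly this sketch, assuming nvme-cli's "name : value" text format:

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # field name with padding stripped
        val=${val# }               # trailing spaces kept, as the trace does for sn/mn/fr
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "${nvme0[vid]} ${nvme0[mdts]}"   # -> 0x1b36 7 on this QEMU controller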
00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.279 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:20.280 21:15:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.280 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:20.281 21:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.281 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:20.282 
21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
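[Editor's sketch] The same nvme_get machinery is now replayed per namespace: functions.sh@53-58 binds a nameref _ctrl_ns to nvme0_ns, globs both the generic-I/O node (ng0n1) and the block node (nvme0n1) under /sys/class/nvme/nvme0, and snapshots `nvme id-ns` for each into its own array. A hedged reconstruction of that walk as it appears in the trace (the @(...) pattern requires shopt -s extglob; ctrl, ctrl_dev and the helper names follow the trace, not the upstream script):

    shopt -s extglob
    declare -n _ctrl_ns=${ctrl_dev}_ns           # e.g. points at nvme0_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                 # guards a literal, unmatched glob
        ns_dev=${ns##*/}                         # ng0n1, then nvme0n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # fills ng0n1[nsze], ng0n1[flbas], ...
        _ctrl_ns[${ns##*n}]=$ns_dev              # keyed by namespace id ("1")
    done

Once both namespace dumps finish, the trace shows the controller being registered (functions.sh@60-63) in the ctrls, nvmes, bdfs and ordered_ctrls maps for BDF 0000:00:11.0, after which the outer loop repeats the whole id-ctrl pass for /sys/class/nvme/nvme1 at 0000:00:10.0.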
00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:20.282 21:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.282 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:20.283 21:15:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.283 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:20.284 21:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:20.284 21:15:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:20.284 21:15:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:20.284 21:15:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:20.284 21:15:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:20.284 21:15:09 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:20.284 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 
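The repetitive eval lines throughout this trace all come from nvme_get, which pipes nvme-cli's id-ctrl/id-ns text output through a colon-split read loop. A minimal sketch of that loop, reconstructed from the @16-@23 trace lines (the whitespace normalization of the key is an assumption; the real helper lives in nvme/functions.sh):

    # Sketch: parse "reg : val" lines from nvme-cli into a global
    # associative array named by $1 (e.g. nvme1).
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip the banner/blank lines
            reg=${reg//[[:space:]]/}           # "lbaf  4 " -> "lbaf4" (assumed)
            val=${val# }                       # drop the pad after ':'
            eval "${ref}[\$reg]=\$val"         # e.g. nvme1[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Note that IFS=: only splits on the first colon per read, so values that themselves contain colons, such as subnqn=nqn.2019-08.org.qemu:12340 below, land intact in val.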
21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:20.285 
21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.285 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:20.286 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.287 21:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:20.287 21:15:09 
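id-ctrl for nvme1 ends at the power-state fields, and the @53-@58 lines switch to the namespace side: a nameref (_ctrl_ns onto nvme1_ns) plus an extglob pattern that matches both the ng1n1 char device and the nvme1n1 block device under the controller's sysfs node. Roughly, as a sketch that assumes extglob/nullglob and the nvme_get helper sketched earlier:

    # Sketch of the per-namespace walk (@53-@58 above); assumes
    # $ctrl and $ctrl_dev are set by the enclosing controller loop.
    shopt -s extglob nullglob
    declare -gA "${ctrl_dev}_ns"
    declare -n _ctrl_ns=${ctrl_dev}_ns                  # e.g. nvme1_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                                # ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev                     # keyed by nsid
    done

Because the glob visits ng1n1 before nvme1n1, _ctrl_ns[1] is written twice and the block device name wins, which matches the order seen in the trace below.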
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:20.287 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:20.288 21:15:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.288 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 
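ng1n1 reports flbas=0x7 and eight LBA formats, with lbaf7 ('ms:64 lbads:12 rp:0 (in use)') active, i.e. 4096-byte data blocks with 64 metadata bytes. Since the low nibble of FLBAS is the active format index for controllers with at most 16 formats, the in-use block size can be pulled back out of the array the trace just built; a small illustrative decode:

    # Decode the in-use LBA format from the ng1n1 array built above.
    fmt=$(( ${ng1n1[flbas]} & 0xf ))          # 0x7 & 0xf = 7
    lbaf=${ng1n1[lbaf$fmt]}                   # 'ms:64 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *} # -> 12
    echo "block size: $((1 << lbads)) B"      # 2^12 = 4096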
21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:20.289 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:20.290 
21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:20.290 21:15:09 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:20.290 21:15:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:20.290 21:15:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:20.290 21:15:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:20.290 21:15:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.290 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
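The trace above repeats one small parsing pattern from nvme/functions.sh: nvme_get pipes `nvme id-ctrl` (or `id-ns`) output through `IFS=: read -r reg val` (functions.sh@21), skips empty values (@22), and evals each pair into a global associative array named after the device (@23). A minimal standalone sketch of that pattern, assuming nvme-cli is installed; parse_into is an illustrative name and the whitespace trimming is simplified relative to the real helper:

parse_into() {
    # Illustrative stand-in for functions.sh's nvme_get: builds a global
    # associative array keyed by register name, e.g. nvme2[vid]=0x1b36.
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"                   # global array, as in functions.sh@20
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # field names arrive padded, e.g. "vid       "
        val=${val#"${val%%[![:space:]]*}"}  # trim leading spaces from the value
        [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
    done < <(nvme id-ctrl "$dev")
}

parse_into nvme2 /dev/nvme2
echo "vid=${nvme2[vid]} sn=${nvme2[sn]} mdts=${nvme2[mdts]}"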
00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.291 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:20.292 21:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
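Two of the fields just captured, nvme2[wctemp]=343 and nvme2[cctemp]=373, are thermal thresholds that the NVMe spec reports in Kelvin, so this QEMU controller advertises a 70 C warning and a 100 C critical threshold. A hypothetical one-line conversion (not part of functions.sh):

kelvin_to_c() { echo $(( $1 - 273 )); }   # NVMe WCTEMP/CCTEMP are in Kelvin
kelvin_to_c "${nvme2[wctemp]}"            # 343 -> 70
kelvin_to_c "${nvme2[cctemp]}"            # 373 -> 100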
00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:20.292 21:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.292 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:20.293 
21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:20.293 
21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.293 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
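The loop entered just above at functions.sh@54 enumerates namespaces with an extglob alternation: for nvme2 the pattern expands to @(ng2|nvme2n)*, so it matches both the generic character-device entries (ng2n1, ng2n2) and the block-device entries under the controller's sysfs directory, and each match is fed back through the same nvme_get parser via id-ns. A sketch of just the glob, assuming extglob is enabled; nullglob is added here defensively and is not taken from the original loop:

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
# "${ctrl##*nvme}" -> "2" (controller index), "${ctrl##*/}" -> "nvme2"
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace entry: ${ns##*/}"     # ng2n1, ng2n2, nvme2n1, ...
done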
00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.294 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:20.295 21:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:20.295 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 
21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:20.296 21:15:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:20.296 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.297 21:15:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.297 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:20.298 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:20.299 21:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:20.299 21:15:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 '
00:09:20.299 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme2n1 LBA formats (cont.): lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:20.300 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:20.300 21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:20.300 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:20.300 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme2n2 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:20.301 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:20.301 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
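The trace above is one pattern repeated once per id-ns field: set IFS=:, read -r reg val, then eval the pair into a namespace-keyed associative array. A minimal standalone sketch of that loop, simplified from the harness's nvme_get in nvme/functions.sh (the function name parse_id_output and the exact whitespace trimming here are my own, not the harness's code):

#!/usr/bin/env bash
# Sketch of the IFS=: / read / eval sequence the trace shows, simplified.
parse_id_output() {
  local ref=$1 dev=$2 reg val
  declare -gA "$ref=()"            # e.g. declares nvme2n2=() at global scope
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}       # "lbaf  4 " -> "lbaf4", "nsze " -> "nsze"
    val=${val# }                   # drop the single pad space after the colon
    [[ -n $reg && -n $val ]] || continue
    eval "${ref}[\$reg]=\$val"     # e.g. nvme2n2[nsze]=0x100000
  done < <(nvme id-ns "$dev")
}

parse_id_output nvme2n2 /dev/nvme2n2 && declare -p nvme2n2

Because read is given two variables, everything after the first colon lands in val, which is why compound values such as 'ms:0 lbads:12 rp:0 (in use)' survive intact.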
00:09:20.301 21:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:20.301 21:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:20.301 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme2n3 id-ns fields (matching nvme2n2): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:20.302 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme2n3 LBA formats: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
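Those last registrations are the bookkeeping layer: every namespace array is indexed into _ctrl_ns, and the controller itself goes into the ctrls/nvmes maps (with bdfs and ordered_ctrls following just below). A condensed sketch of that sysfs walk; the map names come straight from the trace, but the glue is simplified and deriving the BDF via readlink is my own assumption (the harness records the $pci value it already resolved for pci_can_use):

#!/usr/bin/env bash
shopt -s extglob nullglob
declare -A ctrls nvmes bdfs

for ctrl in /sys/class/nvme/nvme*; do
  ctrl_dev=${ctrl##*/}                             # nvme2, nvme3, ...
  declare -A _ctrl_ns=()
  # Same extglob idea as the traced loop: match ng<N>* nodes and <ctrl>n<N>.
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    _ctrl_ns[${ns##*n}]=${ns##*/}                  # e.g. _ctrl_ns[2]=nvme2n2
  done
  ctrls[$ctrl_dev]=$ctrl_dev
  nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # name of that ctrl's ns map
  bdfs[$ctrl_dev]=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
done
declare -p ctrls nvmes bdfs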
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 -> return 0 (no PCI allow/block filters set)
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme3 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 '
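The pci_can_use gate from scripts/common.sh decides whether a discovered controller may be claimed by the test. A hedged sketch of that check; the PCI_BLOCKED/PCI_ALLOWED names follow the SPDK convention, but the exact matching logic here is a simplification, not the real common.sh:

#!/usr/bin/env bash
# A device is usable when it is not blocked and, if an allow list exists,
# only when it is listed there (sketch under the assumptions above).
pci_can_use() {
  local pci=$1
  [[ " $PCI_BLOCKED " == *" $pci "* ]] && return 1   # explicitly blocked
  [[ -z $PCI_ALLOWED ]] && return 0                  # no allow list: accept
  [[ " $PCI_ALLOWED " == *" $pci "* ]]               # must be allow-listed
}

PCI_BLOCKED="" PCI_ALLOWED=""
pci_can_use 0000:00:13.0 && echo "0000:00:13.0 usable"

With both lists empty, as in this run, every controller passes, which is why nvme3 is claimed immediately.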
00:09:20.303 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme3 id-ctrl: rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:09:20.304 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme3 id-ctrl: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
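Fields like oacs are capability bitmasks, so the raw 0x12a above is more readable decoded bit by bit. A sketch of that decode; the bit labels follow my reading of the Identify Controller OACS field in the NVMe base spec and should be treated as illustrative:

#!/usr/bin/env bash
declare -a oacs_bits=(
  [0]="Security Send/Receive" [1]="Format NVM" [2]="Firmware Download/Commit"
  [3]="Namespace Management"  [4]="Device Self-test" [5]="Directives"
  [6]="NVMe-MI"               [7]="Virtualization Mgmt" [8]="Doorbell Buffer Config"
)
oacs=0x12a
for bit in "${!oacs_bits[@]}"; do
  (( oacs & (1 << bit) )) && echo "oacs bit $bit: ${oacs_bits[bit]}"
done

For 0x12a this prints bits 1, 3, 5 and 8, i.e. Format NVM, Namespace Management, Directives and Doorbell Buffer Config, which is consistent with a QEMU emulated controller.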
00:09:20.304 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme3 id-ctrl: wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0
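Some of these values only make sense in spec units: the temperature thresholds are in Kelvin, and mdts is a power-of-two multiplier on the controller's minimum page size. A small sketch turning the captured values into human units, assuming a 4 KiB minimum page size (typical for QEMU's emulated controller, not guaranteed in general):

#!/usr/bin/env bash
mdts=7 wctemp=343 cctemp=373
echo "max transfer: $(( (1 << mdts) * 4096 / 1024 )) KiB"       # 2^7 * 4 KiB = 512 KiB
echo "warning temp: $(( wctemp - 273 )) C, critical: $(( cctemp - 273 )) C"   # 70 C / 100 C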
00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh -- # nvme3 id-ctrl: endgidmax=1 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0
00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read
-r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.305 21:15:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.305 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:20.306 21:15:09 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:20.306 21:15:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:20.306 21:15:09 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:20.306 21:15:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:20.306 21:15:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:20.306 21:15:09 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:20.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.448 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:21.448 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:21.448 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:21.448 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:21.449 21:15:11 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:21.449 21:15:11 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:21.449 21:15:11 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.449 21:15:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:21.449 ************************************ 00:09:21.449 START TEST nvme_simple_copy 00:09:21.449 ************************************ 00:09:21.449 21:15:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:21.709 Initializing NVMe Controllers 00:09:21.709 Attaching to 0000:00:10.0 00:09:21.709 Controller supports SCC. Attached to 0000:00:10.0 00:09:21.709 Namespace ID: 1 size: 6GB 00:09:21.709 Initialization complete. 
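The controller selection above is pure bit arithmetic: each of the four controllers reports oncs=0x15d in the trace, and ctrl_has_scc tests bit 8 of ONCS, the bit the NVMe spec assigns to the Copy command, so every controller qualifies and the first one echoed (nvme1) is used for the test. A minimal sketch of that check, with the value taken from the trace and the helper name illustrative:

    # 0x15d = 0b1_0101_1101 -> bit 8 (0x100) is set: Copy (SCC) supported
    has_scc() {
        local oncs=$1
        (( oncs & (1 << 8) ))
    }
    has_scc 0x15d && echo "SCC supported"    # prints: SCC supported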
00:09:21.709 00:09:21.709 Controller QEMU NVMe Ctrl (12340 ) 00:09:21.709 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:21.709 Namespace Block Size:4096 00:09:21.709 Writing LBAs 0 to 63 with Random Data 00:09:21.709 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:21.709 LBAs matching Written Data: 64 00:09:21.709 00:09:21.709 real 0m0.241s 00:09:21.709 user 0m0.093s 00:09:21.709 sys 0m0.046s 00:09:21.709 21:15:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.709 ************************************ 00:09:21.709 END TEST nvme_simple_copy 00:09:21.709 ************************************ 00:09:21.709 21:15:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:21.709 00:09:21.709 real 0m7.724s 00:09:21.709 user 0m1.072s 00:09:21.709 sys 0m1.402s 00:09:21.710 ************************************ 00:09:21.710 END TEST nvme_scc 00:09:21.710 ************************************ 00:09:21.710 21:15:11 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.710 21:15:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:21.710 21:15:11 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:21.710 21:15:11 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:21.710 21:15:11 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:21.710 21:15:11 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:21.710 21:15:11 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:21.710 21:15:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.710 21:15:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.710 21:15:11 -- common/autotest_common.sh@10 -- # set +x 00:09:21.710 ************************************ 00:09:21.710 START TEST nvme_fdp 00:09:21.710 ************************************ 00:09:21.710 21:15:11 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:21.971 * Looking for test storage... 00:09:21.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.971 21:15:11 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.971 21:15:11 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.971 21:15:11 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.971 21:15:11 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:21.971 21:15:11 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.971 21:15:11 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.971 --rc genhtml_branch_coverage=1 00:09:21.971 --rc genhtml_function_coverage=1 00:09:21.971 --rc genhtml_legend=1 00:09:21.971 --rc geninfo_all_blocks=1 00:09:21.971 --rc geninfo_unexecuted_blocks=1 00:09:21.971 00:09:21.971 ' 00:09:21.971 21:15:11 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.971 --rc genhtml_branch_coverage=1 00:09:21.971 --rc genhtml_function_coverage=1 00:09:21.971 --rc genhtml_legend=1 00:09:21.971 --rc geninfo_all_blocks=1 00:09:21.971 --rc geninfo_unexecuted_blocks=1 00:09:21.971 00:09:21.971 ' 00:09:21.971 21:15:11 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.971 --rc genhtml_branch_coverage=1 00:09:21.971 --rc genhtml_function_coverage=1 00:09:21.971 --rc genhtml_legend=1 00:09:21.971 --rc geninfo_all_blocks=1 00:09:21.971 --rc geninfo_unexecuted_blocks=1 00:09:21.971 00:09:21.971 ' 00:09:21.971 21:15:11 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.971 --rc genhtml_branch_coverage=1 00:09:21.971 --rc genhtml_function_coverage=1 00:09:21.971 --rc genhtml_legend=1 00:09:21.971 --rc geninfo_all_blocks=1 00:09:21.971 --rc geninfo_unexecuted_blocks=1 00:09:21.971 00:09:21.971 ' 00:09:21.971 21:15:11 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.971 21:15:11 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.971 21:15:11 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.971 21:15:11 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.971 21:15:11 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.971 21:15:11 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:21.971 21:15:11 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:21.971 21:15:11 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:21.971 21:15:11 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.971 21:15:11 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:22.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.494 Waiting for block devices as requested 00:09:22.494 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.494 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.755 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.755 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.057 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:28.057 21:15:17 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:28.057 21:15:17 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:28.057 21:15:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.057 21:15:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:28.057 21:15:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:28.057 21:15:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:28.057 21:15:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:28.058 21:15:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:28.058 21:15:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.058 21:15:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:28.058 21:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.058 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.059 21:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.059 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:28.060 21:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 
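The sqes=0x66 and cqes=0x44 values recorded just above are packed nibble pairs: in the Identify Controller data the low nibble is the required (minimum) queue entry size and the high nibble the maximum, each as log2 of the byte count. A quick decode of the trace values, assuming that standard encoding:

    # Low nibble = minimum entry size, high nibble = maximum (log2 bytes)
    sqes=0x66; cqes=0x44
    echo "SQ entry: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4 & 0xf))) bytes"   # 64-64
    echo "CQ entry: $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4 & 0xf))) bytes"   # 16-16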
21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:28.060 21:15:17 nvme_fdp -- 
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
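The namespace walk just traced uses a bash extglob that matches both node flavors a namespace exposes: the generic character device (ng0n1, ...) and the block device (nvme0n1, ...), which is why the same namespace is parsed twice below, once per node type. A sketch of that glob, written out with named intermediates for readability (illustrative, not the literal functions.sh line):

shopt -s extglob nullglob

ctrl=/sys/class/nvme/nvme0
inst=${ctrl##*nvme}   # "0", so ng0n* matches the generic char nodes
name=${ctrl##*/}      # "nvme0", so nvme0n* matches the block nodes

for ns in "$ctrl/"@("ng${inst}"|"${name}n")*; do
  echo "namespace node: ${ns##*/}"   # ng0n1 first, then nvme0n1
done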
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:09:28.060 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:09:28.061 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
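A worked decoding of the LBA-format fields just captured for ng0n1, using the values from this trace (the bit layout is from the NVMe spec: FLBAS bits 3:0 select the active format, and lbads is log2 of the data block size):

declare -A ng0n1=(
  [flbas]=0x4
  [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)

fmt=$((ng0n1[flbas] & 0xf))    # 0x4 -> lbaf4, the entry marked "(in use)"
lbaf=${ng0n1[lbaf$fmt]}
lbads=${lbaf#*lbads:}          # "12 rp:0 (in use)"
lbads=${lbads%% *}             # "12"
echo "active format: lbaf$fmt -> $((1 << lbads))-byte blocks"   # 4096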
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:09:28.062 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:28.063 21:15:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:28.064 21:15:17 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:28.064 21:15:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:28.064 21:15:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:28.064 21:15:17 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
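The mdts=7 just recorded bounds the controller's I/O size: per the NVMe spec, MDTS is 2^n units of the controller's minimum memory page size (CAP.MPSMIN). Assuming the common 4 KiB minimum page, an assumption here since this trace does not show the CAP register:

mdts=7
mpsmin=4096   # assumed CAP.MPSMIN page size in bytes
echo "max data transfer: $(( (1 << mdts) * mpsmin / 1024 )) KiB"   # 512 KiB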
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
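A few records back, before this nvme1 id-ctrl dump began, the fully parsed nvme0 was registered in the suite's global maps (functions.sh@60-63); the rest of the FDP test looks controllers up through them. A sketch of that bookkeeping, with the values from this trace:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

ctrl_dev=nvme0
ctrls["$ctrl_dev"]=nvme0                 # name of the id-ctrl array built above
nvmes["$ctrl_dev"]=nvme0_ns              # name of the controller's namespace map
bdfs["$ctrl_dev"]=0000:00:11.0           # PCI address backing the controller
ordered_ctrls[${ctrl_dev/nvme/}]=nvme0   # positional index 0 -> nvme0

echo "nvme0 sits on PCI ${bdfs[nvme0]}"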
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:09:28.064 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
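The wctemp/cctemp values above are reported in kelvins per the NVMe spec, so the warning and critical thresholds of this QEMU controller convert as follows (integer math, about 0.15 K coarser than the exact scale):

for t in 343 373; do
  printf '%s K = %s C\n' "$t" "$((t - 273))"   # 70 C warning, 100 C critical
done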
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.065 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
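For reference, the loop producing these records boils down to the following. This is a minimal standalone sketch of nvme_get() as it appears in the trace (nvme/functions.sh@16-23), not the exact helper: it assumes the nvme-cli binary at the path this job uses plus an accessible /dev/nvme1, and it skips the extra key munging the real script does for multi-word fields such as the power states.

    declare -A nvme1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # records with no value are skipped, as in the trace
        reg=${reg%% *}                   # strip the padding after the register name
        val=${val# }                     # and the space after the colon
        nvme1[$reg]=$val                 # stands in for the eval 'nvme1[reg]="val"' records
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
    echo "sqes=${nvme1[sqes]} cqes=${nvme1[cqes]} nn=${nvme1[nn]}"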
00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
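The namespace glob on functions.sh@54 is what makes the loop visit both ng1n1 and nvme1n1 below. A sketch of how its two parameter expansions resolve for this controller (values taken from the trace; the @(...) alternation requires extglob, so the harness must have it enabled):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    echo "${ctrl##*nvme}"    # -> 1      (controller instance number)
    echo "${ctrl##*/}"       # -> nvme1  (basename)
    # the pattern thus expands to /sys/class/nvme/nvme1/@(ng1|nvme1n)*
    # matching the character-device node ng1n1 and the block-device node nvme1n1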
00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:09:28.066 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:09:28.067 21:15:17 nvme_fdp -- ng1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:28.068 21:15:17 nvme_fdp -- ng1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:28.068 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
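With flbas=0x7 the in-use format is lbaf7, i.e. 4096-byte data blocks with 64 bytes of metadata, matching the "(in use)" marker above. A standalone sketch of deriving that from the parsed strings (the array literal is repeated so the snippet runs on its own; that the low four bits of flbas index the lbafN entries is an assumption taken from the NVMe base spec):

    declare -A ng1n1=([flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)')
    fmt=$(( ${ng1n1[flbas]} & 0xf ))             # -> 7
    lbaf=${ng1n1[lbaf$fmt]}
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}   # -> 12
    echo "in-use format $fmt: $((1 << lbads))-byte blocks"   # -> format 7, 4096-byte blocks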
00:09:28.068 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.068 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:28.068 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:28.068 21:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:28.068 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:28.069 21:15:17 nvme_fdp -- nvme1n1: same id-ns values as ng1n1 (both nodes expose namespace 1): nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0-lbaf7 as above with lbaf7 '(in use)'
00:09:28.069 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:28.069 21:15:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:28.069 21:15:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:28.069 21:15:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:28.069 21:15:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
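Lines @60-63 are the bookkeeping later stages rely on: the controller's parsed registers stay in the nvme1 array, and three lookup tables plus an ordered list point back at it. A self-contained sketch of that step, with the array names as they appear in the trace:

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    ctrl_dev=nvme1
    ctrls["$ctrl_dev"]=nvme1                 # name of the array holding id-ctrl data
    nvmes["$ctrl_dev"]=nvme1_ns              # name of the array mapping nsid -> ns node
    bdfs["$ctrl_dev"]=0000:00:10.0           # PCI address backing this controller
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme1   # slot 1: deterministic iteration order
    echo "${!bdfs[@]} -> ${bdfs[nvme1]}"     # -> nvme1 -> 0000:00:10.0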
00:09:28.069 21:15:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:28.069 21:15:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:28.069 21:15:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:28.070 21:15:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 (scripts/common.sh@18-27: [[ =~ 0000:00:12.0 ]], [[ -z '' ]], return 0)
00:09:28.070 21:15:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:28.070 21:15:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:28.070 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:28.070 21:15:17 nvme_fdp -- nvme2: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0
00:09:28.071 21:15:17 nvme_fdp -- nvme2: id-ctrl parse continues register by register, as for nvme1
00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:28.071 21:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.071 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.072 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # 
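The trace above boils down to one pattern: nvme_get runs an nvme-cli id command and splits each "reg : val" line of its output into a global associative array (nvme2, ng2n1, ...). A minimal sketch of that pattern, assuming nvme-cli's plain-text output format; the body below is a simplified reconstruction for illustration, not a copy of nvme/functions.sh:

#!/usr/bin/env bash
# Sketch: populate a named global associative array from an nvme-cli
# id command, mirroring the eval/IFS=:/read loop in the trace.
nvme_get() {                    # nvme_get <ref> <id-cmd> <device>
        local ref=$1 reg val
        shift
        local -gA "$ref=()"     # e.g. nvme2=(), as the trace's @20 line does
        while IFS=: read -r reg val; do
                [[ -n $val ]] || continue        # skip banner/blank lines
                reg=${reg//[[:space:]]/}         # "vid       " -> "vid"
                val=${val# }                     # trim the pad after ':'
                eval "${ref}[$reg]=\"\$val\""    # nvme2[vid]="0x1b36"
        done < <(nvme "$@")     # assumes an nvme binary in PATH
}

# Usage against the QEMU controller seen in the log:
# nvme_get nvme2 id-ctrl /dev/nvme2 && echo "${nvme2[mdts]}"   # -> 7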
00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:28.073 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 ng2n1[ncap]=0x100000 ng2n1[nuse]=0x100000 ng2n1[nsfeat]=0x14 ng2n1[nlbaf]=7 ng2n1[flbas]=0x4 ng2n1[mc]=0x3 ng2n1[dpc]=0x1f ng2n1[dps]=0 ng2n1[nmic]=0 ng2n1[rescap]=0 ng2n1[fpi]=0 ng2n1[dlfeat]=1 ng2n1[nawun]=0 ng2n1[nawupf]=0 ng2n1[nacwu]=0 ng2n1[nabsn]=0 ng2n1[nabo]=0 ng2n1[nabspf]=0 ng2n1[noiob]=0 ng2n1[nvmcap]=0 ng2n1[npwg]=0 ng2n1[npwa]=0 ng2n1[npdg]=0 ng2n1[npda]=0 ng2n1[nows]=0 ng2n1[mssrl]=128 ng2n1[mcl]=128 ng2n1[msrc]=127 ng2n1[nulbaf]=0 ng2n1[anagrpid]=0 ng2n1[nsattr]=0 ng2n1[nvmsetid]=0 ng2n1[endgid]=0 ng2n1[nguid]=00000000000000000000000000000000 ng2n1[eui64]=0000000000000000
00:09:28.074 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:28.074 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
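The sysfs walk traced at @54/@55 uses an extglob so that both the generic character devices (ng2n*) and the block devices (nvme2n*) under the controller directory match, and the @53/@58 lines use a bash nameref to record each namespace in the controller's _ns map keyed by namespace id. A sketch under those assumptions; collect_namespaces is an illustrative name, the paths and parameter expansions are the ones the log shows:

#!/usr/bin/env bash
# Sketch: discover a controller's namespaces in sysfs and register them
# in <ctrl>_ns, the way the traced loop builds nvme2_ns.
shopt -s extglob nullglob

collect_namespaces() {          # collect_namespaces /sys/class/nvme/nvme2
        local ctrl=$1 ns ns_dev
        local -n _ctrl_ns=${ctrl##*/}_ns        # nameref to e.g. nvme2_ns
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                ns_dev=${ns##*/}                 # ng2n1, ng2n2, ...
                _ctrl_ns[${ns##*n}]=$ns_dev      # keyed by nsid: 1, 2, ...
        done
}

declare -A nvme2_ns=()
collect_namespaces /sys/class/nvme/nvme2
declare -p nvme2_ns   # e.g. declare -A nvme2_ns=([1]="ng2n1" [2]="ng2n2" [3]="ng2n3")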
00:09:28.074 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.074 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:28.074 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:28.074 21:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:28.074 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:28.074 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 ng2n2[ncap]=0x100000 ng2n2[nuse]=0x100000 ng2n2[nsfeat]=0x14 ng2n2[nlbaf]=7 ng2n2[flbas]=0x4 ng2n2[mc]=0x3 ng2n2[dpc]=0x1f ng2n2[dps]=0 ng2n2[nmic]=0 ng2n2[rescap]=0 ng2n2[fpi]=0 ng2n2[dlfeat]=1 ng2n2[nawun]=0 ng2n2[nawupf]=0 ng2n2[nacwu]=0 ng2n2[nabsn]=0 ng2n2[nabo]=0 ng2n2[nabspf]=0 ng2n2[noiob]=0 ng2n2[nvmcap]=0 ng2n2[npwg]=0 ng2n2[npwa]=0 ng2n2[npdg]=0 ng2n2[npda]=0 ng2n2[nows]=0 ng2n2[mssrl]=128 ng2n2[mcl]=128 ng2n2[msrc]=127 ng2n2[nulbaf]=0 ng2n2[anagrpid]=0 ng2n2[nsattr]=0 ng2n2[nvmsetid]=0 ng2n2[endgid]=0 ng2n2[nguid]=00000000000000000000000000000000 ng2n2[eui64]=0000000000000000
00:09:28.075 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:28.076 
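[editor note] The xtrace above is nvme/functions.sh@17-23 populating one bash associative array per namespace from `nvme id-ns` output. A minimal sketch of that pattern, assuming simplified whitespace trimming and an illustrative function name (not the exact upstream helper):

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern visible in the trace: run the given command,
    # split each "reg : val" line on the first colon, and store it in a global
    # associative array named by $ref (functions.sh@20-23).
    nvme_get_sketch() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                    # declare the array globally, as at @20
      while IFS=: read -r reg val; do        # first colon splits reg from val (@21)
        reg=${reg//[[:space:]]/}             # strip padding around the register name
        val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace from the value
        [[ -n $val ]] || continue            # skip empty values, as the @22 test does
        eval "${ref}[$reg]=\"\$val\""        # e.g. ng2n3[nsze]=0x100000 (@23)
      done < <("$@")
    }
    # nvme_get_sketch ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3

Note that `read` keeps everything after the first colon in val, which is why composite fields such as lbaf4 land intact as 'ms:0 lbads:12 rp:0 (in use)'.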
21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.076 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:28.077 21:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.077 21:15:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:28.077 21:15:17 nvme_fdp -- 
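[editor note] Each lbafN string captured above records the metadata bytes (ms), the LBA data size as a power of two (lbads), and relative performance (rp); the low nibble of flbas selects the format in use. A quick standalone decode, with the values copied from the trace:

    # flbas=0x4 picks lbaf4; lbads:12 means 2^12 = 4096-byte blocks, ms:0 metadata.
    declare -A ng2n3=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt_index=$(( ${ng2n3[flbas]} & 0xf ))       # low nibble of FLBAS = format index
    lbaf=${ng2n3[lbaf$fmt_index]}
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}   # extract the lbads field
    echo "block size: $((1 << lbads)) bytes"     # -> block size: 4096 bytes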
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.077 
21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:28.077 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:28.078 21:15:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.078 
21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.078 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:28.079 21:15:17 nvme_fdp 
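[editor note] The loop header repeated at functions.sh@54 uses an extglob pattern so one pass covers both the generic character nodes (ng2nX) and the block namespaces (nvme2nX) of the same controller; since _ctrl_ns at @58 is keyed by the namespace index, the nvme2nX entry simply overwrites the earlier ng2nX one. A sketch of that enumeration, with paths simplified:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    # @(ng2|nvme2n)* matches ng2n1, ng2n2, ... and nvme2n1, nvme2n2, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue
      _ctrl_ns[${ns##*n}]=${ns##*/}   # e.g. _ctrl_ns[1]=nvme2n1 replaces ng2n1
    done
    declare -p _ctrl_ns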
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:28.079 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:28.080 21:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:28.080 21:15:17 
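[editor note] Every namespace in this run reports the same geometry: nsze = ncap = nuse = 0x100000, i.e. fully allocated, not thin-provisioned. At the in-use 4096-byte format that works out to a 4 GiB namespace, which is easy to confirm with shell arithmetic:

    # Values taken from the trace: 0x100000 blocks of 4096 bytes each.
    nsze=0x100000
    echo "$(( nsze * 4096 / 1024**3 )) GiB"   # -> 4 GiB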
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:28.080 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:28.081 21:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.081 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.082 21:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:28.082 21:15:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:28.082 21:15:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:28.082 21:15:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.082 21:15:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
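(Editorial sketch, for readers of this trace: every nvme_get call logged here follows the same pattern, splitting each line of `nvme id-ctrl` / `nvme id-ns` output on the first colon into a register name and a value, then storing the pair in a bash associative array via eval. A minimal standalone version of that loop, assuming nvme-cli is installed and /dev/nvme3 exists; the array name `ctrl` is illustrative, not the script's:

    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # register name, e.g. mdts
        [[ -n $reg && -n $val ]] || continue
        eval "ctrl[$reg]=\"${val# }\""      # value, e.g. 7
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "${ctrl[mdts]}"                    # -> 7, matching nvme3[mdts]=7 above

This is a sketch only; the real nvme/functions.sh additionally handles quoting and namespace arrays, as the trace shows.)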
00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.082 21:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:28.082 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 
21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.083 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.084 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:28.346 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:28.346 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:28.347 21:15:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:28.347 21:15:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:28.348 21:15:17 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:28.348 21:15:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:28.348 21:15:17 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:28.348 21:15:17 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.179 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.179 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.179 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.179 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.179 21:15:18 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:29.179 21:15:18 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:29.179 21:15:18 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.179 21:15:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:29.179 ************************************ 00:09:29.179 START TEST nvme_flexible_data_placement 00:09:29.179 ************************************ 00:09:29.179 21:15:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:29.441 Initializing NVMe Controllers 00:09:29.441 Attaching to 0000:00:13.0 00:09:29.441 Controller supports FDP Attached to 0000:00:13.0 00:09:29.441 Namespace ID: 1 Endurance Group ID: 1 00:09:29.441 Initialization complete. 
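(The get_ctrls_with_feature loop above selected nvme3 because ctrl_has_fdp tests CTRATT bit 19: the 0x88010 reported by nvme3 has that bit set, while the 0x8000 reported by the other three controllers does not. The same test in isolation, mirroring the `(( ctratt & 1 << 19 ))` check from functions.sh:

    for ctratt in 0x8000 0x88010; do
        (( ctratt & 1 << 19 )) && echo "$ctratt: FDP capable" || echo "$ctratt: no FDP"
    done
    # -> 0x8000: no FDP
    # -> 0x88010: FDP capable
)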
00:09:29.441 
00:09:29.441 ==================================
00:09:29.441 == FDP tests for Namespace: #01 ==
00:09:29.441 ==================================
00:09:29.441 
00:09:29.441 Get Feature: FDP:
00:09:29.441 =================
00:09:29.441 Enabled: Yes
00:09:29.441 FDP configuration Index: 0
00:09:29.441 
00:09:29.441 FDP configurations log page
00:09:29.441 ===========================
00:09:29.441 Number of FDP configurations: 1
00:09:29.441 Version: 0
00:09:29.441 Size: 112
00:09:29.441 FDP Configuration Descriptor: 0
00:09:29.441 Descriptor Size: 96
00:09:29.441 Reclaim Group Identifier format: 2
00:09:29.441 FDP Volatile Write Cache: Not Present
00:09:29.441 FDP Configuration: Valid
00:09:29.441 Vendor Specific Size: 0
00:09:29.441 Number of Reclaim Groups: 2
00:09:29.441 Number of Reclaim Unit Handles: 8
00:09:29.441 Max Placement Identifiers: 128
00:09:29.441 Number of Namespaces Supported: 256
00:09:29.441 Reclaim Unit Nominal Size: 6000000 bytes
00:09:29.441 Estimated Reclaim Unit Time Limit: Not Reported
00:09:29.441 RUH Desc #000: RUH Type: Initially Isolated
00:09:29.441 RUH Desc #001: RUH Type: Initially Isolated
00:09:29.441 RUH Desc #002: RUH Type: Initially Isolated
00:09:29.441 RUH Desc #003: RUH Type: Initially Isolated
00:09:29.441 RUH Desc #004: RUH Type: Initially Isolated
00:09:29.441 RUH Desc #005: RUH Type: Initially Isolated
00:09:29.441 RUH Desc #006: RUH Type: Initially Isolated
00:09:29.441 RUH Desc #007: RUH Type: Initially Isolated
00:09:29.441 
00:09:29.441 FDP reclaim unit handle usage log page
00:09:29.441 ======================================
00:09:29.441 Number of Reclaim Unit Handles: 8
00:09:29.441 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:29.441 RUH Usage Desc #001: RUH Attributes: Unused
00:09:29.441 RUH Usage Desc #002: RUH Attributes: Unused
00:09:29.441 RUH Usage Desc #003: RUH Attributes: Unused
00:09:29.441 RUH Usage Desc #004: RUH Attributes: Unused
00:09:29.441 RUH Usage Desc #005: RUH Attributes: Unused
00:09:29.441 RUH Usage Desc #006: RUH Attributes: Unused
00:09:29.441 RUH Usage Desc #007: RUH Attributes: Unused
00:09:29.441 
00:09:29.441 FDP statistics log page
00:09:29.441 =======================
00:09:29.441 Host bytes with metadata written: 2094346240
00:09:29.441 Media bytes with metadata written: 2095546368
00:09:29.441 Media bytes erased: 0
00:09:29.441 
00:09:29.441 FDP Reclaim unit handle status
00:09:29.441 ==============================
00:09:29.441 Number of RUHS descriptors: 2
00:09:29.441 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000012ad
00:09:29.441 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:29.441 
00:09:29.441 FDP write on placement id: 0 success
00:09:29.441 
00:09:29.441 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:29.441 
00:09:29.441 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:29.441 
00:09:29.441 Get Feature: FDP Events for Placement handle: #0
00:09:29.441 ========================
00:09:29.441 Number of FDP Events: 6
00:09:29.441 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:29.441 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:29.441 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:09:29.441 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:29.441 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:29.441 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:29.441 
00:09:29.441 FDP events log
page 00:09:29.441 =================== 00:09:29.441 Number of FDP events: 1 00:09:29.441 FDP Event #0: 00:09:29.441 Event Type: RU Not Written to Capacity 00:09:29.441 Placement Identifier: Valid 00:09:29.441 NSID: Valid 00:09:29.441 Location: Valid 00:09:29.441 Placement Identifier: 0 00:09:29.441 Event Timestamp: 5 00:09:29.441 Namespace Identifier: 1 00:09:29.441 Reclaim Group Identifier: 0 00:09:29.441 Reclaim Unit Handle Identifier: 0 00:09:29.441 00:09:29.441 FDP test passed 00:09:29.441 00:09:29.441 real 0m0.218s 00:09:29.441 user 0m0.069s 00:09:29.441 sys 0m0.048s 00:09:29.441 21:15:19 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.441 21:15:19 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:29.441 ************************************ 00:09:29.441 END TEST nvme_flexible_data_placement 00:09:29.441 ************************************ 00:09:29.702 00:09:29.702 real 0m7.740s 00:09:29.702 user 0m1.037s 00:09:29.702 sys 0m1.428s 00:09:29.702 ************************************ 00:09:29.702 END TEST nvme_fdp 00:09:29.702 ************************************ 00:09:29.702 21:15:19 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.702 21:15:19 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:29.702 21:15:19 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:29.702 21:15:19 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:29.702 21:15:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.702 21:15:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.702 21:15:19 -- common/autotest_common.sh@10 -- # set +x 00:09:29.702 ************************************ 00:09:29.702 START TEST nvme_rpc 00:09:29.702 ************************************ 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:29.702 * Looking for test storage... 
00:09:29.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.702 21:15:19 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.702 --rc genhtml_branch_coverage=1 00:09:29.702 --rc genhtml_function_coverage=1 00:09:29.702 --rc genhtml_legend=1 00:09:29.702 --rc geninfo_all_blocks=1 00:09:29.702 --rc geninfo_unexecuted_blocks=1 00:09:29.702 00:09:29.702 ' 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.702 --rc genhtml_branch_coverage=1 00:09:29.702 --rc genhtml_function_coverage=1 00:09:29.702 --rc genhtml_legend=1 00:09:29.702 --rc geninfo_all_blocks=1 00:09:29.702 --rc geninfo_unexecuted_blocks=1 00:09:29.702 00:09:29.702 ' 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:29.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.702 --rc genhtml_branch_coverage=1 00:09:29.702 --rc genhtml_function_coverage=1 00:09:29.702 --rc genhtml_legend=1 00:09:29.702 --rc geninfo_all_blocks=1 00:09:29.702 --rc geninfo_unexecuted_blocks=1 00:09:29.702 00:09:29.702 ' 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.702 --rc genhtml_branch_coverage=1 00:09:29.702 --rc genhtml_function_coverage=1 00:09:29.702 --rc genhtml_legend=1 00:09:29.702 --rc geninfo_all_blocks=1 00:09:29.702 --rc geninfo_unexecuted_blocks=1 00:09:29.702 00:09:29.702 ' 00:09:29.702 21:15:19 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.702 21:15:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:29.702 21:15:19 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:29.963 21:15:19 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:29.963 21:15:19 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:29.963 21:15:19 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:29.963 21:15:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:29.963 21:15:19 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=79047 00:09:29.963 21:15:19 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:29.963 21:15:19 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 79047 00:09:29.963 21:15:19 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 79047 ']' 00:09:29.963 21:15:19 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:29.963 21:15:19 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.963 21:15:19 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.963 21:15:19 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.963 21:15:19 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.963 21:15:19 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.963 [2024-12-16 21:15:19.517414] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
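The get_first_nvme_bdf helper traced above reduces to: enumerate the controllers with gen_nvme.sh, pull each traddr out of the JSON with jq, and take the first entry. A condensed sketch of that selection, using the paths from this run:

  # Enumerate NVMe bdfs via SPDK's gen_nvme.sh and take the first (0000:00:10.0 in this run).
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  bdf=${bdfs[0]}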
00:09:29.963 [2024-12-16 21:15:19.517555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79047 ] 00:09:29.963 [2024-12-16 21:15:19.663480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:30.236 [2024-12-16 21:15:19.695674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.236 [2024-12-16 21:15:19.695739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.809 21:15:20 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.809 21:15:20 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:30.809 21:15:20 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:31.071 Nvme0n1 00:09:31.071 21:15:20 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:31.071 21:15:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:31.333 request: 00:09:31.333 { 00:09:31.333 "bdev_name": "Nvme0n1", 00:09:31.333 "filename": "non_existing_file", 00:09:31.333 "method": "bdev_nvme_apply_firmware", 00:09:31.333 "req_id": 1 00:09:31.333 } 00:09:31.333 Got JSON-RPC error response 00:09:31.333 response: 00:09:31.333 { 00:09:31.333 "code": -32603, 00:09:31.333 "message": "open file failed." 00:09:31.333 } 00:09:31.333 21:15:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:31.333 21:15:20 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:31.333 21:15:20 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:31.595 21:15:21 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:31.595 21:15:21 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 79047 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 79047 ']' 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 79047 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79047 00:09:31.595 killing process with pid 79047 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79047' 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@973 -- # kill 79047 00:09:31.595 21:15:21 nvme_rpc -- common/autotest_common.sh@978 -- # wait 79047 00:09:31.856 ************************************ 00:09:31.856 END TEST nvme_rpc 00:09:31.856 ************************************ 00:09:31.856 00:09:31.856 real 0m2.261s 00:09:31.856 user 0m4.324s 00:09:31.856 sys 0m0.580s 00:09:31.856 21:15:21 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.856 21:15:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.856 21:15:21 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:31.856 21:15:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:31.856 21:15:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.856 21:15:21 -- common/autotest_common.sh@10 -- # set +x 00:09:31.856 ************************************ 00:09:31.856 START TEST nvme_rpc_timeouts 00:09:31.856 ************************************ 00:09:31.856 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:32.117 * Looking for test storage... 00:09:32.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:32.117 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:32.117 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:32.117 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.117 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:32.117 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:32.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.118 21:15:21 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.118 --rc genhtml_branch_coverage=1 00:09:32.118 --rc genhtml_function_coverage=1 00:09:32.118 --rc genhtml_legend=1 00:09:32.118 --rc geninfo_all_blocks=1 00:09:32.118 --rc geninfo_unexecuted_blocks=1 00:09:32.118 00:09:32.118 ' 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.118 --rc genhtml_branch_coverage=1 00:09:32.118 --rc genhtml_function_coverage=1 00:09:32.118 --rc genhtml_legend=1 00:09:32.118 --rc geninfo_all_blocks=1 00:09:32.118 --rc geninfo_unexecuted_blocks=1 00:09:32.118 00:09:32.118 ' 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.118 --rc genhtml_branch_coverage=1 00:09:32.118 --rc genhtml_function_coverage=1 00:09:32.118 --rc genhtml_legend=1 00:09:32.118 --rc geninfo_all_blocks=1 00:09:32.118 --rc geninfo_unexecuted_blocks=1 00:09:32.118 00:09:32.118 ' 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.118 --rc genhtml_branch_coverage=1 00:09:32.118 --rc genhtml_function_coverage=1 00:09:32.118 --rc genhtml_legend=1 00:09:32.118 --rc geninfo_all_blocks=1 00:09:32.118 --rc geninfo_unexecuted_blocks=1 00:09:32.118 00:09:32.118 ' 00:09:32.118 21:15:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.118 21:15:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79101 00:09:32.118 21:15:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79101 00:09:32.118 21:15:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79134 00:09:32.118 21:15:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:32.118 21:15:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79134 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 79134 ']' 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.118 21:15:21 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:32.118 21:15:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:32.118 [2024-12-16 21:15:21.777409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:32.118 [2024-12-16 21:15:21.777771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79134 ] 00:09:32.378 [2024-12-16 21:15:21.925055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:32.378 [2024-12-16 21:15:21.956846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.378 [2024-12-16 21:15:21.956918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.950 21:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.950 21:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:32.950 21:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:32.950 Checking default timeout settings: 00:09:32.950 21:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:33.522 21:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:33.522 Making settings changes with rpc: 00:09:33.522 21:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:33.522 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:33.522 Check default vs. modified settings: 00:09:33.522 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79101 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79101 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:34.095 Setting action_on_timeout is changed as expected. 
00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79101 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79101 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:34.095 Setting timeout_us is changed as expected. 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79101 00:09:34.095 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:34.096 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79101 00:09:34.096 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:34.096 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:34.096 Setting timeout_admin_us is changed as expected. 00:09:34.096 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:34.096 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:34.096 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
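Each of the three settings checks above follows the same round trip: snapshot the configuration with save_config before and after bdev_nvme_set_options, then compare the field in the two snapshots. A condensed sketch of that flow, with the rpc path, tmpfile names, and values taken from this run:

  # Snapshot defaults, apply new timeouts, snapshot again, then diff the fields.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default_79101
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_79101
  # Each checked field must differ between the snapshots, e.g.:
  grep timeout_us /tmp/settings_modified_79101   # expect 12000000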
00:09:34.096 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:34.096 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79101 /tmp/settings_modified_79101 00:09:34.096 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79134 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 79134 ']' 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 79134 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79134 00:09:34.096 killing process with pid 79134 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79134' 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 79134 00:09:34.096 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 79134 00:09:34.357 RPC TIMEOUT SETTING TEST PASSED. 00:09:34.357 21:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:34.357 ************************************ 00:09:34.357 END TEST nvme_rpc_timeouts 00:09:34.357 ************************************ 00:09:34.357 00:09:34.357 real 0m2.448s 00:09:34.357 user 0m4.857s 00:09:34.357 sys 0m0.568s 00:09:34.357 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.357 21:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:34.357 21:15:24 -- spdk/autotest.sh@239 -- # uname -s 00:09:34.357 21:15:24 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:34.357 21:15:24 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:34.357 21:15:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.357 21:15:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.357 21:15:24 -- common/autotest_common.sh@10 -- # set +x 00:09:34.618 ************************************ 00:09:34.618 START TEST sw_hotplug 00:09:34.618 ************************************ 00:09:34.619 21:15:24 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:34.619 * Looking for test storage... 
00:09:34.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.619 21:15:24 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.619 21:15:24 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.619 21:15:24 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.619 21:15:24 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.619 21:15:24 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:34.619 21:15:24 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.619 21:15:24 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.619 --rc genhtml_branch_coverage=1 00:09:34.619 --rc genhtml_function_coverage=1 00:09:34.619 --rc genhtml_legend=1 00:09:34.619 --rc geninfo_all_blocks=1 00:09:34.619 --rc geninfo_unexecuted_blocks=1 00:09:34.619 00:09:34.619 ' 00:09:34.619 21:15:24 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.619 --rc genhtml_branch_coverage=1 00:09:34.619 --rc genhtml_function_coverage=1 00:09:34.619 --rc genhtml_legend=1 00:09:34.619 --rc geninfo_all_blocks=1 00:09:34.619 --rc geninfo_unexecuted_blocks=1 00:09:34.619 00:09:34.619 ' 00:09:34.619 21:15:24 
sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.619 --rc genhtml_branch_coverage=1 00:09:34.619 --rc genhtml_function_coverage=1 00:09:34.619 --rc genhtml_legend=1 00:09:34.619 --rc geninfo_all_blocks=1 00:09:34.619 --rc geninfo_unexecuted_blocks=1 00:09:34.619 00:09:34.619 ' 00:09:34.619 21:15:24 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.619 --rc genhtml_branch_coverage=1 00:09:34.619 --rc genhtml_function_coverage=1 00:09:34.619 --rc genhtml_legend=1 00:09:34.619 --rc geninfo_all_blocks=1 00:09:34.619 --rc geninfo_unexecuted_blocks=1 00:09:34.619 00:09:34.619 ' 00:09:34.619 21:15:24 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:34.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.142 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.142 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.142 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.142 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.142 21:15:24 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:35.142 21:15:24 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:35.142 21:15:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:35.142 21:15:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:35.142 21:15:24 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:35.142 21:15:24 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.143 
21:15:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:35.143 21:15:24 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.143 21:15:24 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:35.143 21:15:24 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:35.143 21:15:24 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:35.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.666 Waiting for block devices as requested 00:09:35.667 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.667 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.928 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.928 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.214 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:41.214 21:15:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:41.214 21:15:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:41.476 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:41.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.476 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:41.737 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:42.000 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.000 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.000 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:42.000 21:15:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79979 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:42.261 21:15:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:42.261 21:15:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:42.261 21:15:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:42.261 21:15:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:42.261 21:15:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:42.261 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:42.262 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:42.262 21:15:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:42.262 Initializing NVMe Controllers 00:09:42.262 Attaching to 0000:00:10.0 00:09:42.262 Attaching to 0000:00:11.0 00:09:42.262 Attached to 0000:00:10.0 00:09:42.262 Attached to 0000:00:11.0 00:09:42.262 Initialization complete. Starting I/O... 00:09:42.262 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:42.262 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:42.262 00:09:43.647 QEMU NVMe Ctrl (12340 ): 2346 I/Os completed (+2346) 00:09:43.647 QEMU NVMe Ctrl (12341 ): 2371 I/Os completed (+2371) 00:09:43.647 00:09:44.589 QEMU NVMe Ctrl (12340 ): 5495 I/Os completed (+3149) 00:09:44.589 QEMU NVMe Ctrl (12341 ): 5522 I/Os completed (+3151) 00:09:44.589 00:09:45.533 QEMU NVMe Ctrl (12340 ): 8675 I/Os completed (+3180) 00:09:45.533 QEMU NVMe Ctrl (12341 ): 8702 I/Os completed (+3180) 00:09:45.533 00:09:46.466 QEMU NVMe Ctrl (12340 ): 12357 I/Os completed (+3682) 00:09:46.466 QEMU NVMe Ctrl (12341 ): 12370 I/Os completed (+3668) 00:09:46.466 00:09:47.426 QEMU NVMe Ctrl (12340 ): 16674 I/Os completed (+4317) 00:09:47.426 QEMU NVMe Ctrl (12341 ): 16689 I/Os completed (+4319) 00:09:47.426 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:48.368 [2024-12-16 21:15:37.757725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:48.368 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:48.368 [2024-12-16 21:15:37.759023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.759105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.759122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.759141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:48.368 [2024-12-16 21:15:37.760981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.761048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.761073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.761089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:48.368 [2024-12-16 21:15:37.779345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:48.368 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:48.368 [2024-12-16 21:15:37.780490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.780543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.780571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.780588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:48.368 [2024-12-16 21:15:37.781934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.781976] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.781995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 [2024-12-16 21:15:37.782013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:48.368 00:09:48.368 21:15:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:48.368 21:15:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.368 21:15:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.368 21:15:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.368 21:15:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:48.368 Attaching to 0000:00:10.0 00:09:48.368 Attached to 0000:00:10.0 00:09:48.629 21:15:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:48.629 21:15:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.629 21:15:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:48.629 Attaching to 0000:00:11.0 00:09:48.629 Attached to 0000:00:11.0 00:09:49.571 QEMU NVMe Ctrl (12340 ): 2752 I/Os completed (+2752) 00:09:49.571 QEMU NVMe Ctrl (12341 ): 2537 I/Os completed (+2537) 00:09:49.571 00:09:50.516 QEMU NVMe Ctrl (12340 ): 5770 I/Os completed (+3018) 00:09:50.516 QEMU NVMe Ctrl (12341 ): 5615 I/Os completed (+3078) 00:09:50.516 00:09:51.461 QEMU NVMe Ctrl (12340 ): 8666 I/Os completed (+2896) 00:09:51.461 QEMU NVMe Ctrl (12341 ): 8545 I/Os completed (+2930) 00:09:51.461 00:09:52.406 QEMU NVMe Ctrl (12340 ): 11734 I/Os completed (+3068) 00:09:52.406 QEMU NVMe Ctrl (12341 ): 11615 I/Os completed (+3070) 00:09:52.406 00:09:53.349 QEMU NVMe Ctrl (12340 ): 14734 I/Os completed (+3000) 00:09:53.349 QEMU NVMe Ctrl (12341 ): 14616 I/Os completed (+3001) 00:09:53.349 00:09:54.293 QEMU NVMe Ctrl (12340 ): 17758 I/Os completed (+3024) 00:09:54.293 QEMU NVMe Ctrl (12341 ): 17648 I/Os completed (+3032) 00:09:54.293 00:09:55.672 QEMU NVMe Ctrl (12340 ): 20876 I/Os completed (+3118) 00:09:55.672 QEMU NVMe Ctrl (12341 ): 20718 I/Os completed (+3070) 00:09:55.672 00:09:56.238 QEMU NVMe Ctrl (12340 ): 25196 I/Os completed (+4320) 00:09:56.238 QEMU NVMe Ctrl (12341 ): 25005 I/Os completed (+4287) 
00:09:56.238 00:09:57.615 QEMU NVMe Ctrl (12340 ): 29505 I/Os completed (+4309) 00:09:57.615 QEMU NVMe Ctrl (12341 ): 29315 I/Os completed (+4310) 00:09:57.615 00:09:58.560 QEMU NVMe Ctrl (12340 ): 32576 I/Os completed (+3071) 00:09:58.560 QEMU NVMe Ctrl (12341 ): 32404 I/Os completed (+3089) 00:09:58.560 00:09:59.504 QEMU NVMe Ctrl (12340 ): 35604 I/Os completed (+3028) 00:09:59.504 QEMU NVMe Ctrl (12341 ): 35440 I/Os completed (+3036) 00:09:59.504 00:10:00.446 QEMU NVMe Ctrl (12340 ): 38683 I/Os completed (+3079) 00:10:00.446 QEMU NVMe Ctrl (12341 ): 38520 I/Os completed (+3080) 00:10:00.446 00:10:00.446 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:00.446 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:00.446 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.446 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.446 [2024-12-16 21:15:50.102179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:00.446 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:00.446 [2024-12-16 21:15:50.103359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.103414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.103431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.103453] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:00.446 [2024-12-16 21:15:50.104842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.104900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.104915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.104934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.446 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.446 [2024-12-16 21:15:50.123412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:00.446 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:00.446 [2024-12-16 21:15:50.124481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.124535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.124554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.124570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:00.446 [2024-12-16 21:15:50.125843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.125890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.125911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 [2024-12-16 21:15:50.125925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.446 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:00.446 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:00.446 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:00.446 EAL: Scan for (pci) bus failed. 00:10:00.708 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.708 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.708 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:00.708 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:00.708 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.708 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.708 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.708 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:00.708 Attaching to 0000:00:10.0 00:10:00.708 Attached to 0000:00:10.0 00:10:00.970 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:00.970 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.970 21:15:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:00.970 Attaching to 0000:00:11.0 00:10:00.970 Attached to 0000:00:11.0 00:10:01.542 QEMU NVMe Ctrl (12340 ): 1748 I/Os completed (+1748) 00:10:01.542 QEMU NVMe Ctrl (12341 ): 1500 I/Os completed (+1500) 00:10:01.542 00:10:02.486 QEMU NVMe Ctrl (12340 ): 4896 I/Os completed (+3148) 00:10:02.486 QEMU NVMe Ctrl (12341 ): 4652 I/Os completed (+3152) 00:10:02.486 00:10:03.432 QEMU NVMe Ctrl (12340 ): 8048 I/Os completed (+3152) 00:10:03.432 QEMU NVMe Ctrl (12341 ): 7803 I/Os completed (+3151) 00:10:03.432 00:10:04.382 QEMU NVMe Ctrl (12340 ): 11156 I/Os completed (+3108) 00:10:04.382 QEMU NVMe Ctrl (12341 ): 10918 I/Os completed (+3115) 00:10:04.382 00:10:05.327 QEMU NVMe Ctrl (12340 ): 14312 I/Os completed (+3156) 00:10:05.327 QEMU NVMe Ctrl (12341 ): 14074 I/Os completed (+3156) 00:10:05.327 00:10:06.272 QEMU NVMe Ctrl (12340 ): 17372 I/Os completed (+3060) 00:10:06.272 QEMU NVMe Ctrl (12341 ): 17137 I/Os completed (+3063) 00:10:06.272 00:10:07.662 QEMU NVMe Ctrl (12340 ): 20480 I/Os completed (+3108) 00:10:07.662 QEMU NVMe Ctrl (12341 ): 20245 I/Os completed (+3108) 00:10:07.662 
00:10:08.236 QEMU NVMe Ctrl (12340 ): 23604 I/Os completed (+3124) 00:10:08.236 QEMU NVMe Ctrl (12341 ): 23369 I/Os completed (+3124) 00:10:08.236 00:10:09.622 QEMU NVMe Ctrl (12340 ): 26793 I/Os completed (+3189) 00:10:09.622 QEMU NVMe Ctrl (12341 ): 26549 I/Os completed (+3180) 00:10:09.622 00:10:10.566 QEMU NVMe Ctrl (12340 ): 30492 I/Os completed (+3699) 00:10:10.566 QEMU NVMe Ctrl (12341 ): 30233 I/Os completed (+3684) 00:10:10.566 00:10:11.511 QEMU NVMe Ctrl (12340 ): 33500 I/Os completed (+3008) 00:10:11.511 QEMU NVMe Ctrl (12341 ): 33250 I/Os completed (+3017) 00:10:11.511 00:10:12.456 QEMU NVMe Ctrl (12340 ): 36504 I/Os completed (+3004) 00:10:12.456 QEMU NVMe Ctrl (12341 ): 36267 I/Os completed (+3017) 00:10:12.456 00:10:13.029 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:13.029 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:13.029 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.029 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.029 [2024-12-16 21:16:02.467272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:13.029 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:13.029 [2024-12-16 21:16:02.470076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.029 [2024-12-16 21:16:02.470150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.029 [2024-12-16 21:16:02.470167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.029 [2024-12-16 21:16:02.470192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.029 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:13.029 [2024-12-16 21:16:02.471914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.029 [2024-12-16 21:16:02.471977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.029 [2024-12-16 21:16:02.471992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.029 [2024-12-16 21:16:02.472006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.029 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.029 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.029 [2024-12-16 21:16:02.491808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
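Annotation: the interval lines are cumulative completion counters printed about once a second, so the parenthesised delta is a rough per-controller IOPS figure (about 3.0k to 3.7k here under uio_pci_generic). One quick way to average the deltas from a captured copy of this log; the filename is hypothetical:

    # Average the '(+NNNN)' per-interval deltas from a saved copy of this log.
    grep -o '(+[0-9]*)' hotplug.log | tr -d '(+)' |
        awk '{ n++; s += $1 } END { if (n) printf "avg +%.0f I/Os per interval\n", s/n }'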
00:10:13.029 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:13.030 [2024-12-16 21:16:02.493046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.030 [2024-12-16 21:16:02.493088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.030 [2024-12-16 21:16:02.493105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.030 [2024-12-16 21:16:02.493119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.030 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:13.030 [2024-12-16 21:16:02.494299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.030 [2024-12-16 21:16:02.494346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.030 [2024-12-16 21:16:02.494363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.030 [2024-12-16 21:16:02.494376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.030 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:13.030 EAL: Scan for (pci) bus failed. 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.030 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:13.030 Attaching to 0000:00:10.0 00:10:13.030 Attached to 0000:00:10.0 00:10:13.292 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:13.292 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.292 21:16:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:13.292 Attaching to 0000:00:11.0 00:10:13.292 Attached to 0000:00:11.0 00:10:13.292 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:13.292 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:13.292 [2024-12-16 21:16:02.820596] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:25.545 21:16:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:25.545 21:16:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.545 21:16:14 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.06 00:10:25.545 21:16:14 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.06 00:10:25.545 21:16:14 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:25.545 21:16:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.06 00:10:25.545 21:16:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.06 2 00:10:25.545 remove_attach_helper took 43.06s to complete (handling 2 nvme drive(s)) 21:16:14 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:32.118 21:16:20 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79979 00:10:32.118 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79979) - No such process 00:10:32.118 21:16:20 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79979 00:10:32.118 21:16:20 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:32.118 21:16:20 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:32.118 21:16:20 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:32.118 21:16:20 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80532 00:10:32.118 21:16:20 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:32.118 21:16:20 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80532 00:10:32.118 21:16:20 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:32.118 21:16:20 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80532 ']' 00:10:32.118 21:16:20 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.118 21:16:20 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.118 21:16:20 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.118 21:16:20 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.118 21:16:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:32.118 [2024-12-16 21:16:20.915063] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
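Annotation: the first timed run ends above (helper_time 43.06 s for three detach/attach cycles); the earlier spdk_rpc_close deprecation warning is just the first app shutting down, and "kill -0 79979" failing with "No such process" confirms it is already gone. tgt_run_hotplug then starts a fresh spdk_tgt (pid 80532), waits for its RPC socket, and, just below at @115, enables the bdev layer's hotplug monitor; note the trap at @112 re-runs a PCI rescan on exit so devices are not left detached if the test dies. waitforlisten's internals are not shown in this log, so the polling loop here is an assumed stand-in for it:

    # Condensed sketch of the target restart above; the socket-poll loop is an
    # assumption standing in for autotest_common.sh's waitforlisten, and
    # rpc_cmd is taken to be the harness wrapper around scripts/rpc.py.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & spdk_tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # RPC socket is up
    rpc_cmd bdev_nvme_set_hotplug -e                      # sw_hotplug.sh@115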
00:10:32.118 [2024-12-16 21:16:20.915210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80532 ] 00:10:32.118 [2024-12-16 21:16:21.060884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.118 [2024-12-16 21:16:21.089456] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:32.118 21:16:21 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.118 21:16:21 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:32.118 21:16:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:32.118 21:16:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:32.118 21:16:21 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:32.118 21:16:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:32.118 21:16:21 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:32.118 21:16:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:32.118 21:16:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:32.118 21:16:21 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.688 21:16:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.688 21:16:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.688 21:16:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:38.688 21:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:38.688 [2024-12-16 21:16:27.873647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:38.688 [2024-12-16 21:16:27.874697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.688 [2024-12-16 21:16:27.874728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.688 [2024-12-16 21:16:27.874740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.688 [2024-12-16 21:16:27.874754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.688 [2024-12-16 21:16:27.874762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.688 [2024-12-16 21:16:27.874769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.688 [2024-12-16 21:16:27.874778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.688 [2024-12-16 21:16:27.874784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.688 [2024-12-16 21:16:27.874792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.688 [2024-12-16 21:16:27.874799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.688 [2024-12-16 21:16:27.874806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.688 [2024-12-16 21:16:27.874814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.688 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:38.688 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.688 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.688 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.688 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.688 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.688 21:16:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.688 21:16:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.688 [2024-12-16 21:16:28.373930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
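Annotation: this phase runs with use_bdev=true (set at @29), so instead of watching I/O counters the helper asks the target which NVMe bdevs still exist and polls until none remain; the "(( 2 > 0 ))", "(( 1 > 0 ))", "(( 0 > 0 ))" progression above is that countdown. The loop below is reassembled from the commands the xtrace shows at sw_hotplug.sh@12-13 and @50-51; it is a reconstruction, not the verbatim script:

    # Reconstruction of the bdev-side wait loop (sw_hotplug.sh@50-51).
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done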
00:10:38.688 [2024-12-16 21:16:28.374939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.688 [2024-12-16 21:16:28.374970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.688 [2024-12-16 21:16:28.374980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.688 [2024-12-16 21:16:28.374991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.689 [2024-12-16 21:16:28.374998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.689 [2024-12-16 21:16:28.375006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.689 [2024-12-16 21:16:28.375012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.689 [2024-12-16 21:16:28.375020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.689 [2024-12-16 21:16:28.375026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.689 [2024-12-16 21:16:28.375036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.689 [2024-12-16 21:16:28.375043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.689 [2024-12-16 21:16:28.375050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.689 21:16:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.947 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:38.947 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:39.205 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:39.205 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:39.205 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:39.464 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:39.464 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:39.464 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:39.464 21:16:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.464 21:16:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.464 21:16:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.464 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:39.464 21:16:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:39.464 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.464 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.464 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:39.464 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:39.464 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.464 21:16:29 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.464 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.464 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:39.464 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:39.723 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.723 21:16:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.924 21:16:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.924 21:16:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.924 21:16:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.924 21:16:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.924 21:16:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.924 21:16:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:51.924 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.924 [2024-12-16 21:16:41.274145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
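Annotation: after the 12 s settle at @66, the @70-71 comparison above verifies that hotplug monitoring re-created both bdevs; the backslash-riddled right-hand side in the xtrace is just how bash prints the quoted pattern operand of [[ == ]], not corruption. As plain bash:

    # The re-attach check at sw_hotplug.sh@70-71, reassembled from the xtrace.
    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]   # both drives are back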
00:10:51.924 [2024-12-16 21:16:41.275236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.924 [2024-12-16 21:16:41.275269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.924 [2024-12-16 21:16:41.275281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.924 [2024-12-16 21:16:41.275293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.924 [2024-12-16 21:16:41.275302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.924 [2024-12-16 21:16:41.275309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.925 [2024-12-16 21:16:41.275317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.925 [2024-12-16 21:16:41.275323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.925 [2024-12-16 21:16:41.275331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.925 [2024-12-16 21:16:41.275337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.925 [2024-12-16 21:16:41.275346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.925 [2024-12-16 21:16:41.275352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.183 [2024-12-16 21:16:41.674152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
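Annotation: the four-abort pattern that repeats on every removal decodes cleanly: opcode 0x0c is Asynchronous Event Request, cid 187-190 are the four AERs this target keeps outstanding on the admin queue, and the completion status (00/07) is status code type 0 (generic) with status code 0x07, Command Abort Requested, which is exactly what nvme_pcie_qpair_abort_trackers is doing to them. None of these blocks indicate a test failure; they are the expected side effect of failing a controller with AERs in flight.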
00:10:52.183 [2024-12-16 21:16:41.675153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.183 [2024-12-16 21:16:41.675184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.183 [2024-12-16 21:16:41.675194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.183 [2024-12-16 21:16:41.675206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.183 [2024-12-16 21:16:41.675213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.183 [2024-12-16 21:16:41.675222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.183 [2024-12-16 21:16:41.675228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.183 [2024-12-16 21:16:41.675236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.183 [2024-12-16 21:16:41.675242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.183 [2024-12-16 21:16:41.675251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.183 [2024-12-16 21:16:41.675257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.183 [2024-12-16 21:16:41.675265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.183 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:52.183 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.183 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.183 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.183 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.183 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.183 21:16:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.183 21:16:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.183 21:16:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.183 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:52.183 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:52.441 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.441 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.441 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.441 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.441 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.441 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.441 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.441 21:16:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:52.441 21:16:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.441 21:16:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.441 21:16:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.640 21:16:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.640 21:16:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.640 21:16:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.640 21:16:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.640 21:16:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.640 21:16:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:04.640 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:04.640 [2024-12-16 21:16:54.174366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:04.640 [2024-12-16 21:16:54.175423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.640 [2024-12-16 21:16:54.175528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.640 [2024-12-16 21:16:54.175550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.640 [2024-12-16 21:16:54.175563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.640 [2024-12-16 21:16:54.175571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.640 [2024-12-16 21:16:54.175578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.640 [2024-12-16 21:16:54.175588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.640 [2024-12-16 21:16:54.175594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.640 [2024-12-16 21:16:54.175602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.640 [2024-12-16 21:16:54.175608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.640 [2024-12-16 21:16:54.175616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.640 [2024-12-16 21:16:54.175622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.898 [2024-12-16 21:16:54.574370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:04.898 [2024-12-16 21:16:54.575515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.898 [2024-12-16 21:16:54.575546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.898 [2024-12-16 21:16:54.575555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.898 [2024-12-16 21:16:54.575566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.898 [2024-12-16 21:16:54.575572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.898 [2024-12-16 21:16:54.575582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.899 [2024-12-16 21:16:54.575588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.899 [2024-12-16 21:16:54.575596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.899 [2024-12-16 21:16:54.575602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.899 [2024-12-16 21:16:54.575610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.899 [2024-12-16 21:16:54.575616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.899 [2024-12-16 21:16:54.575623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:05.156 21:16:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.156 21:16:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.156 21:16:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:05.156 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:05.414 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.414 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.414 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.414 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:05.414 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:05.414 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.414 21:16:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.615 21:17:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.615 21:17:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.615 21:17:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.615 21:17:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:11:17.615 21:17:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:11:17.615 21:17:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:11:17.615 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:11:17.615 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 21:17:06 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:17.615 21:17:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.615 21:17:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.615 21:17:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.615 21:17:07 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:17.615 21:17:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.615 21:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.615 21:17:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.615 21:17:07 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:17.615 21:17:07 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:17.615 21:17:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:17.615 21:17:07 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:17.615 21:17:07 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:17.615 21:17:07 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:17.615 21:17:07 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:17.615 21:17:07 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:17.615 21:17:07 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:17.615 21:17:07 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:17.615 21:17:07 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:17.615 21:17:07 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:17.615 21:17:07 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:24.208 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.208 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.209 21:17:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.209 21:17:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.209 21:17:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.209 [2024-12-16 21:17:13.114680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:24.209 [2024-12-16 21:17:13.115525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.209 [2024-12-16 21:17:13.115620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.209 [2024-12-16 21:17:13.115695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.209 [2024-12-16 21:17:13.115836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.209 [2024-12-16 21:17:13.115859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.209 [2024-12-16 21:17:13.115886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.209 [2024-12-16 21:17:13.115912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.209 [2024-12-16 21:17:13.115964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.209 [2024-12-16 21:17:13.115994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.209 [2024-12-16 21:17:13.116018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.209 [2024-12-16 21:17:13.116034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.209 [2024-12-16 21:17:13.116087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.209 [2024-12-16 21:17:13.614680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
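Annotation: the second timed run re-enters the helper with the same arguments. From the @27-30 locals and the loop markers in the xtrace, its overall shape is roughly the following; everything except the @-referenced lines is inferred, and the inner function names are invented for the sketch:

    # Approximate skeleton of 'remove_attach_helper 3 6 true', inferred from
    # the xtrace markers; detach_nvmes/wait_for_bdevs_gone/reattach_nvmes are
    # hypothetical names, and the sleep formula is an assumption.
    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # here: 3 6 true
        while (( hotplug_events-- )); do                      # @38
            detach_nvmes                                      # @39-40: echo 1 per device
            wait_for_bdevs_gone                               # @50-51 poll (use_bdev=true)
            reattach_nvmes                                    # @56-62: rescan and rebind
            sleep $(( 2 * hotplug_wait ))                     # @66: sleep 12
        done
    }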
00:11:24.209 [2024-12-16 21:17:13.615485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.209 [2024-12-16 21:17:13.615584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.209 [2024-12-16 21:17:13.615663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.209 [2024-12-16 21:17:13.615724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.209 [2024-12-16 21:17:13.615745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.209 [2024-12-16 21:17:13.615775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.209 [2024-12-16 21:17:13.615800] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.209 [2024-12-16 21:17:13.615850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.209 [2024-12-16 21:17:13.615875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.209 [2024-12-16 21:17:13.615899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.209 [2024-12-16 21:17:13.615914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.209 [2024-12-16 21:17:13.615970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.209 21:17:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.209 21:17:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.209 21:17:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.209 21:17:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.412 21:17:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.412 21:17:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.412 21:17:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.412 21:17:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.412 21:17:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.412 21:17:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.412 21:17:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.412 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.412 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.412 [2024-12-16 21:17:26.014901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:36.412 [2024-12-16 21:17:26.015742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.412 [2024-12-16 21:17:26.015828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.412 [2024-12-16 21:17:26.015890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.412 [2024-12-16 21:17:26.015998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.412 [2024-12-16 21:17:26.016021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.412 [2024-12-16 21:17:26.016048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.412 [2024-12-16 21:17:26.016072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.412 [2024-12-16 21:17:26.016122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.412 [2024-12-16 21:17:26.016237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.412 [2024-12-16 21:17:26.016322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.412 [2024-12-16 21:17:26.016340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.412 [2024-12-16 21:17:26.016363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.979 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:36.979 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.979 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.979 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.979 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.979 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.979 21:17:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.979 21:17:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.979 21:17:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.979 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:36.979 21:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.979 [2024-12-16 21:17:26.614911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:36.979 [2024-12-16 21:17:26.615728] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.979 [2024-12-16 21:17:26.615826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.979 [2024-12-16 21:17:26.615889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.979 [2024-12-16 21:17:26.615946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.979 [2024-12-16 21:17:26.615965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.979 [2024-12-16 21:17:26.615994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.979 [2024-12-16 21:17:26.616050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.979 [2024-12-16 21:17:26.616134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.979 [2024-12-16 21:17:26.616158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.979 [2024-12-16 21:17:26.616181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.979 [2024-12-16 21:17:26.616197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.979 [2024-12-16 21:17:26.616254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.545 21:17:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.545 21:17:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.545 21:17:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.545 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:37.803 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.803 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.803 21:17:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.002 21:17:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.002 21:17:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.002 21:17:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.002 21:17:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.002 21:17:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.002 21:17:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:50.002 [2024-12-16 21:17:39.415129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:50.002 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:50.002 [2024-12-16 21:17:39.415880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.002 [2024-12-16 21:17:39.415910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.002 [2024-12-16 21:17:39.415922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.002 [2024-12-16 21:17:39.415934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.002 [2024-12-16 21:17:39.415945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.002 [2024-12-16 21:17:39.415951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.002 [2024-12-16 21:17:39.415959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.002 [2024-12-16 21:17:39.415966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.002 [2024-12-16 21:17:39.415973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.002 [2024-12-16 21:17:39.415979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.002 [2024-12-16 21:17:39.415989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.002 [2024-12-16 21:17:39.415995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.261 [2024-12-16 21:17:39.815132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:50.261 [2024-12-16 21:17:39.815866] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.261 [2024-12-16 21:17:39.815896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.261 [2024-12-16 21:17:39.815905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.261 [2024-12-16 21:17:39.815916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.261 [2024-12-16 21:17:39.815924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.261 [2024-12-16 21:17:39.815932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.261 [2024-12-16 21:17:39.815939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.261 [2024-12-16 21:17:39.815948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.261 [2024-12-16 21:17:39.815955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.261 [2024-12-16 21:17:39.815962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.261 [2024-12-16 21:17:39.815968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.261 [2024-12-16 21:17:39.815976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.261 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:50.261 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.261 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.261 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.261 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.261 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.261 21:17:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.261 21:17:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.261 21:17:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.261 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:50.261 21:17:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.519 21:17:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.20 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.20 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:12:02.713 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:02.713 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80532 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80532 ']' 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80532 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80532 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80532' 00:12:02.713 killing process with pid 80532 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80532 00:12:02.713 21:17:52 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80532 00:12:02.971 21:17:52 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:03.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:03.805 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.805 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.805 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.805 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.805 00:12:03.805 real 2m29.388s 00:12:03.805 user 1m49.497s 00:12:03.805 sys 0m18.354s 00:12:03.805 
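With the hotplug loop complete, the harness reports that remove_attach_helper took 45.20 s for the two drives, then tears the target down via killprocess, which refuses to signal a PID unless ps still reports the expected process name (reactor_0 here) and that name is not sudo. A hedged, simplified sketch of that guard, reconstructed from the trace rather than copied from autotest_common.sh:

    # Illustrative reconstruction of the killprocess guard seen in the trace.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # nothing to do: PID is gone
        if [ "$(uname)" = Linux ]; then
            # Guard against the PID having been recycled by another process.
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" != sudo ] || return 1       # never SIGTERM a sudo parent
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

The setup.sh run that follows rebinds the remaining controllers (nvme -> uio_pci_generic), and the real/user/sys totals close out the sw_hotplug test before autotest.sh moves on to nvme_xnvme.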
************************************ 00:12:03.805 END TEST sw_hotplug 00:12:03.805 ************************************ 00:12:03.805 21:17:53 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.805 21:17:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.069 21:17:53 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:04.069 21:17:53 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:04.069 21:17:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.069 21:17:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.069 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:12:04.069 ************************************ 00:12:04.069 START TEST nvme_xnvme 00:12:04.069 ************************************ 00:12:04.069 21:17:53 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:04.069 * Looking for test storage... 00:12:04.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:04.069 21:17:53 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:04.069 21:17:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:04.069 21:17:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:04.069 21:17:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.069 21:17:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:04.070 21:17:53 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.070 21:17:53 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.070 21:17:53 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.070 21:17:53 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:04.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.070 --rc genhtml_branch_coverage=1 00:12:04.070 --rc genhtml_function_coverage=1 00:12:04.070 --rc genhtml_legend=1 00:12:04.070 --rc geninfo_all_blocks=1 00:12:04.070 --rc geninfo_unexecuted_blocks=1 00:12:04.070 00:12:04.070 ' 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:04.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.070 --rc genhtml_branch_coverage=1 00:12:04.070 --rc genhtml_function_coverage=1 00:12:04.070 --rc genhtml_legend=1 00:12:04.070 --rc geninfo_all_blocks=1 00:12:04.070 --rc geninfo_unexecuted_blocks=1 00:12:04.070 00:12:04.070 ' 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:04.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.070 --rc genhtml_branch_coverage=1 00:12:04.070 --rc genhtml_function_coverage=1 00:12:04.070 --rc genhtml_legend=1 00:12:04.070 --rc geninfo_all_blocks=1 00:12:04.070 --rc geninfo_unexecuted_blocks=1 00:12:04.070 00:12:04.070 ' 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:04.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.070 --rc genhtml_branch_coverage=1 00:12:04.070 --rc genhtml_function_coverage=1 00:12:04.070 --rc genhtml_legend=1 00:12:04.070 --rc geninfo_all_blocks=1 00:12:04.070 --rc geninfo_unexecuted_blocks=1 00:12:04.070 00:12:04.070 ' 00:12:04.070 21:17:53 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:04.070 21:17:53 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:04.070 21:17:53 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:04.070 21:17:53 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:12:04.070 21:17:53 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:04.071 21:17:53 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:04.071 21:17:53 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:04.071 #define SPDK_CONFIG_H 00:12:04.071 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:04.071 #define SPDK_CONFIG_APPS 1 00:12:04.071 #define SPDK_CONFIG_ARCH native 00:12:04.071 #define SPDK_CONFIG_ASAN 1 00:12:04.071 #undef SPDK_CONFIG_AVAHI 00:12:04.071 #undef SPDK_CONFIG_CET 00:12:04.071 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:04.071 #define SPDK_CONFIG_COVERAGE 1 00:12:04.071 #define SPDK_CONFIG_CROSS_PREFIX 00:12:04.071 #undef SPDK_CONFIG_CRYPTO 00:12:04.071 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:04.071 #undef SPDK_CONFIG_CUSTOMOCF 00:12:04.071 #undef SPDK_CONFIG_DAOS 00:12:04.071 #define SPDK_CONFIG_DAOS_DIR 00:12:04.071 #define SPDK_CONFIG_DEBUG 1 00:12:04.071 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:04.071 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:04.071 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:04.071 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:04.071 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:04.071 #undef SPDK_CONFIG_DPDK_UADK 00:12:04.071 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:04.071 #define SPDK_CONFIG_EXAMPLES 1 00:12:04.071 #undef SPDK_CONFIG_FC 00:12:04.071 #define SPDK_CONFIG_FC_PATH 00:12:04.071 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:04.071 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:04.071 #define SPDK_CONFIG_FSDEV 1 00:12:04.071 #undef SPDK_CONFIG_FUSE 00:12:04.071 #undef SPDK_CONFIG_FUZZER 00:12:04.071 #define SPDK_CONFIG_FUZZER_LIB 00:12:04.071 #undef SPDK_CONFIG_GOLANG 00:12:04.071 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:04.071 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:04.071 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:04.071 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:04.071 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:04.071 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:04.071 #undef SPDK_CONFIG_HAVE_LZ4 00:12:04.071 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:04.071 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:04.071 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:04.071 #define SPDK_CONFIG_IDXD 1 00:12:04.071 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:04.071 #undef SPDK_CONFIG_IPSEC_MB 00:12:04.071 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:04.071 #define SPDK_CONFIG_ISAL 1 00:12:04.071 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:04.071 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:04.071 #define SPDK_CONFIG_LIBDIR 00:12:04.071 #undef SPDK_CONFIG_LTO 00:12:04.071 #define SPDK_CONFIG_MAX_LCORES 128 00:12:04.071 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:04.071 #define SPDK_CONFIG_NVME_CUSE 1 00:12:04.071 #undef SPDK_CONFIG_OCF 00:12:04.071 #define SPDK_CONFIG_OCF_PATH 00:12:04.071 #define SPDK_CONFIG_OPENSSL_PATH 00:12:04.071 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:04.071 #define SPDK_CONFIG_PGO_DIR 00:12:04.071 #undef SPDK_CONFIG_PGO_USE 00:12:04.071 #define SPDK_CONFIG_PREFIX /usr/local 00:12:04.071 #undef SPDK_CONFIG_RAID5F 00:12:04.071 #undef SPDK_CONFIG_RBD 00:12:04.071 #define SPDK_CONFIG_RDMA 1 00:12:04.071 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:04.071 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:04.071 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:04.071 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:04.071 #define SPDK_CONFIG_SHARED 1 00:12:04.071 #undef SPDK_CONFIG_SMA 00:12:04.071 #define SPDK_CONFIG_TESTS 1 00:12:04.071 #undef SPDK_CONFIG_TSAN 00:12:04.071 #define SPDK_CONFIG_UBLK 1 00:12:04.071 #define SPDK_CONFIG_UBSAN 1 00:12:04.071 #undef SPDK_CONFIG_UNIT_TESTS 00:12:04.071 #undef SPDK_CONFIG_URING 00:12:04.071 #define SPDK_CONFIG_URING_PATH 00:12:04.071 #undef SPDK_CONFIG_URING_ZNS 00:12:04.071 #undef SPDK_CONFIG_USDT 00:12:04.071 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:04.071 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:04.071 #undef SPDK_CONFIG_VFIO_USER 00:12:04.071 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:04.071 #define SPDK_CONFIG_VHOST 1 00:12:04.071 #define SPDK_CONFIG_VIRTIO 1 00:12:04.071 #undef SPDK_CONFIG_VTUNE 00:12:04.071 #define SPDK_CONFIG_VTUNE_DIR 00:12:04.071 #define SPDK_CONFIG_WERROR 1 00:12:04.071 #define SPDK_CONFIG_WPDK_DIR 00:12:04.071 #define SPDK_CONFIG_XNVME 1 00:12:04.071 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:04.071 21:17:53 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:04.071 21:17:53 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.071 21:17:53 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.071 21:17:53 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.071 21:17:53 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.071 21:17:53 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.071 21:17:53 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.071 21:17:53 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.071 21:17:53 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.071 21:17:53 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:04.072 21:17:53 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:04.072 21:17:53 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:04.072 21:17:53 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:04.072 21:17:53 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:04.073 21:17:53 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 81895 ]] 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 81895 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:04.074 21:17:53 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.I4TnxB 00:12:04.337 21:17:53 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.I4TnxB/tests/xnvme /tmp/spdk.I4TnxB 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13373001728 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6209400832 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13373001728 00:12:04.337 21:17:53 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6209400832 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:04.337 21:17:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98497052672 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1205727232 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:04.338 * Looking for test storage... 
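Before the xnvme tests run, set_test_storage walks a list of candidate directories (the test dir itself, then a mktemp -udt spdk.XXXXXX fallback and its tests/ subdirectory), parses df -T into per-mount size/avail arrays, and exports SPDK_TEST_STORAGE once it finds a filesystem with the requested ~2 GiB free, as the "* Found test storage" line below confirms. A simplified sketch of the same probe, using df --output in place of the array bookkeeping in autotest_common.sh (directory names and the exact size are taken from the trace; the loop body is illustrative):

    # Simplified variant of the set_test_storage probe: pick the first
    # candidate directory whose filesystem has enough free bytes.
    requested_size=2214592512                     # ~2 GiB + slack, per the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)   # -u: name only, not created yet
    for dir in "$testdir" "$storage_fallback/tests" "$storage_fallback"; do
        mkdir -p "$dir" || continue
        avail=$(df --output=avail --block-size=1 "$dir" | tail -n1)
        if [ "${avail:-0}" -ge "$requested_size" ]; then
            export SPDK_TEST_STORAGE=$dir
            printf '* Found test storage at %s\n' "$dir"
            break
        fi
    done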
00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13373001728 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:04.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:04.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.338 --rc genhtml_branch_coverage=1 00:12:04.338 --rc genhtml_function_coverage=1 00:12:04.338 --rc genhtml_legend=1 00:12:04.338 --rc geninfo_all_blocks=1 00:12:04.338 --rc geninfo_unexecuted_blocks=1 00:12:04.338 00:12:04.338 ' 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:04.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.338 --rc genhtml_branch_coverage=1 00:12:04.338 --rc genhtml_function_coverage=1 00:12:04.338 --rc genhtml_legend=1 00:12:04.338 --rc geninfo_all_blocks=1 
00:12:04.338 --rc geninfo_unexecuted_blocks=1 00:12:04.338 00:12:04.338 ' 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:04.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.338 --rc genhtml_branch_coverage=1 00:12:04.338 --rc genhtml_function_coverage=1 00:12:04.338 --rc genhtml_legend=1 00:12:04.338 --rc geninfo_all_blocks=1 00:12:04.338 --rc geninfo_unexecuted_blocks=1 00:12:04.338 00:12:04.338 ' 00:12:04.338 21:17:53 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:04.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.338 --rc genhtml_branch_coverage=1 00:12:04.338 --rc genhtml_function_coverage=1 00:12:04.338 --rc genhtml_legend=1 00:12:04.338 --rc geninfo_all_blocks=1 00:12:04.338 --rc geninfo_unexecuted_blocks=1 00:12:04.338 00:12:04.338 ' 00:12:04.338 21:17:53 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.338 21:17:53 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.338 21:17:53 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.338 21:17:53 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.338 21:17:53 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.338 21:17:53 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:04.338 21:17:53 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.338 21:17:53 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:04.338 21:17:53 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:04.338 21:17:53 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:04.338 21:17:53 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:04.339 21:17:53 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:04.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:04.863 Waiting for block devices as requested 00:12:04.863 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.863 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.125 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.125 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.418 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:10.418 21:17:59 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:10.743 21:18:00 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:10.743 21:18:00 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:11.026 21:18:00 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:11.026 21:18:00 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:11.026 No valid GPT data, bailing 00:12:11.026 21:18:00 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:11.026 21:18:00 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:11.026 21:18:00 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:11.026 21:18:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:11.026 21:18:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.026 21:18:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.026 21:18:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:11.026 ************************************ 00:12:11.026 START TEST xnvme_rpc 00:12:11.026 ************************************ 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82284 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82284 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82284 ']' 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.026 21:18:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.026 [2024-12-16 21:18:00.588865] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
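The device gate just traced works as follows: for each /dev/nvme*n* namespace, block_in_use runs spdk-gpt.py (which prints "No valid GPT data, bailing" on an unpartitioned disk) and then queries blkid for a partition-table type; an empty PTTYPE means nothing claims the namespace and it is free for the test. A simplified reconstruction of that check (illustrative, not the exact upstream body; $rootdir stands in for the spdk repo path):

    block_in_use() {
        local block=$1 pt
        "$rootdir/scripts/spdk-gpt.py" "$block"        # "No valid GPT data, bailing" when empty
        pt=$(blkid -s PTTYPE -o value "$block") || true
        if [[ -z $pt ]]; then
            return 1     # no partition table: not in use, safe to claim
        fi
        return 0         # partitioned: leave it alone
    }

With /dev/nvme0n1 confirmed free, the harness records it as the backing file for the libaio and io_uring mechanisms and /dev/ng0n1 (the NVMe character-device namespace) for io_uring_cmd, then launches spdk_tgt for the first xnvme_rpc pass.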
00:12:11.026 [2024-12-16 21:18:00.589132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82284 ] 00:12:11.288 [2024-12-16 21:18:00.732581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.288 [2024-12-16 21:18:00.762237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.861 xnvme_bdev 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:11.861 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.862 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.862 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.862 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:11.862 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:11.862 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:11.862 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:11.862 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.862 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.862 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:12.123 21:18:01 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82284 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82284 ']' 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82284 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82284 00:12:12.123 killing process with pid 82284 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82284' 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82284 00:12:12.123 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82284 00:12:12.386 00:12:12.386 real 0m1.404s 00:12:12.386 user 0m1.492s 00:12:12.386 sys 0m0.384s 00:12:12.386 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.386 ************************************ 00:12:12.386 END TEST xnvme_rpc 00:12:12.386 ************************************ 00:12:12.386 21:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.386 21:18:01 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:12.386 21:18:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.386 21:18:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.386 21:18:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:12.386 ************************************ 00:12:12.386 START TEST xnvme_bdevperf 00:12:12.386 ************************************ 00:12:12.386 21:18:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:12.386 21:18:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:12.386 21:18:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:12.386 21:18:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:12.386 21:18:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:12.386 21:18:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:12.386 21:18:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:12.386 21:18:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:12.386 { 00:12:12.386 "subsystems": [ 00:12:12.386 { 00:12:12.386 "subsystem": "bdev", 00:12:12.386 "config": [ 00:12:12.386 { 00:12:12.386 "params": { 00:12:12.386 "io_mechanism": "libaio", 00:12:12.386 "conserve_cpu": false, 00:12:12.386 "filename": "/dev/nvme0n1", 00:12:12.386 "name": "xnvme_bdev" 00:12:12.386 }, 00:12:12.386 "method": "bdev_xnvme_create" 00:12:12.386 }, 00:12:12.386 { 00:12:12.386 "method": "bdev_wait_for_examine" 00:12:12.386 } 00:12:12.386 ] 00:12:12.386 } 00:12:12.386 ] 00:12:12.386 } 00:12:12.386 [2024-12-16 21:18:02.064275] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:12.386 [2024-12-16 21:18:02.064413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82336 ] 00:12:12.647 [2024-12-16 21:18:02.211842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.647 [2024-12-16 21:18:02.241429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.909 Running I/O for 5 seconds... 00:12:14.800 23444.00 IOPS, 91.58 MiB/s [2024-12-16T21:18:05.445Z] 23603.00 IOPS, 92.20 MiB/s [2024-12-16T21:18:06.391Z] 23473.67 IOPS, 91.69 MiB/s [2024-12-16T21:18:07.780Z] 22893.25 IOPS, 89.43 MiB/s 00:12:18.080 Latency(us) 00:12:18.080 [2024-12-16T21:18:07.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.080 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:18.080 xnvme_bdev : 5.00 22731.05 88.79 0.00 0.00 2808.87 415.90 7763.50 00:12:18.080 [2024-12-16T21:18:07.780Z] =================================================================================================================== 00:12:18.080 [2024-12-16T21:18:07.780Z] Total : 22731.05 88.79 0.00 0.00 2808.87 415.90 7763.50 00:12:18.080 21:18:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:18.080 21:18:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:18.080 21:18:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:18.080 21:18:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:18.080 21:18:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:18.080 { 00:12:18.080 "subsystems": [ 00:12:18.080 { 00:12:18.080 "subsystem": "bdev", 00:12:18.080 "config": [ 00:12:18.080 { 00:12:18.080 "params": { 00:12:18.080 "io_mechanism": "libaio", 00:12:18.080 "conserve_cpu": false, 00:12:18.080 "filename": "/dev/nvme0n1", 00:12:18.080 "name": "xnvme_bdev" 00:12:18.080 }, 00:12:18.080 "method": "bdev_xnvme_create" 00:12:18.080 }, 00:12:18.080 { 00:12:18.080 "method": "bdev_wait_for_examine" 00:12:18.080 } 00:12:18.080 ] 00:12:18.080 } 00:12:18.080 ] 00:12:18.080 } 00:12:18.080 [2024-12-16 21:18:07.706458] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
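Both bdevperf runs receive their bdev table as --json /dev/fd/62: gen_conf emits the subsystem config shown above on an inherited file descriptor rather than a temp file. A minimal stand-alone sketch of the same pattern using process substitution (the descriptor number is whatever bash assigns, not necessarily 62; the JSON is copied from the trace):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    json='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"io_mechanism":"libaio","conserve_cpu":false,
                 "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}'
    "$bdevperf" --json <(printf '%s' "$json") \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096

The bdev_wait_for_examine entry matters: it keeps bdevperf from starting I/O before the freshly created xnvme bdev has been examined and registered.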
00:12:18.080 [2024-12-16 21:18:07.706597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82411 ] 00:12:18.342 [2024-12-16 21:18:07.852835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.342 [2024-12-16 21:18:07.891718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.604 Running I/O for 5 seconds... 00:12:20.499 29311.00 IOPS, 114.50 MiB/s [2024-12-16T21:18:11.144Z] 27854.50 IOPS, 108.81 MiB/s [2024-12-16T21:18:12.089Z] 28222.67 IOPS, 110.24 MiB/s [2024-12-16T21:18:13.478Z] 27738.00 IOPS, 108.35 MiB/s [2024-12-16T21:18:13.478Z] 27846.60 IOPS, 108.78 MiB/s 00:12:23.779 Latency(us) 00:12:23.779 [2024-12-16T21:18:13.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.779 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:23.779 xnvme_bdev : 5.01 27818.02 108.66 0.00 0.00 2295.86 475.77 6074.68 00:12:23.779 [2024-12-16T21:18:13.479Z] =================================================================================================================== 00:12:23.779 [2024-12-16T21:18:13.479Z] Total : 27818.02 108.66 0.00 0.00 2295.86 475.77 6074.68 00:12:23.779 ************************************ 00:12:23.779 END TEST xnvme_bdevperf 00:12:23.779 ************************************ 00:12:23.779 00:12:23.779 real 0m11.274s 00:12:23.779 user 0m3.077s 00:12:23.779 sys 0m6.966s 00:12:23.779 21:18:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.779 21:18:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:23.779 21:18:13 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:23.779 21:18:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.779 21:18:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.779 21:18:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.779 ************************************ 00:12:23.779 START TEST xnvme_fio_plugin 00:12:23.779 ************************************ 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:23.779 
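With both bdevperf patterns done (randread ~22.7k IOPS, randwrite ~27.8k IOPS on libaio without conserve_cpu), the suite moves to xnvme_fio_plugin, which drives the same bdev through fio's external ioengine. Stripped of the wrapper plumbing, the invocation in the trace reduces to the following shape (the JSON conf arrives on fd 62 from gen_conf exactly as in the bdevperf runs, so this is a sketch rather than a command that runs in isolation):

    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 \
        --numjobs=1 --rw=randread --time_based --runtime=5 \
        --thread=1 --name xnvme_bdev

Note that --filename names the SPDK bdev from the JSON config, not a kernel block device; the spdk_bdev ioengine resolves it inside the SPDK application embedded in fio.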
21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:23.779 21:18:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.779 { 00:12:23.779 "subsystems": [ 00:12:23.779 { 00:12:23.779 "subsystem": "bdev", 00:12:23.779 "config": [ 00:12:23.779 { 00:12:23.779 "params": { 00:12:23.779 "io_mechanism": "libaio", 00:12:23.779 "conserve_cpu": false, 00:12:23.779 "filename": "/dev/nvme0n1", 00:12:23.779 "name": "xnvme_bdev" 00:12:23.779 }, 00:12:23.779 "method": "bdev_xnvme_create" 00:12:23.779 }, 00:12:23.779 { 00:12:23.779 "method": "bdev_wait_for_examine" 00:12:23.779 } 00:12:23.779 ] 00:12:23.779 } 00:12:23.779 ] 00:12:23.779 } 00:12:24.041 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:24.041 fio-3.35 00:12:24.041 Starting 1 thread 00:12:29.339 00:12:29.339 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82514: Mon Dec 16 21:18:18 2024 00:12:29.339 read: IOPS=29.8k, BW=116MiB/s (122MB/s)(582MiB/5001msec) 00:12:29.339 slat (usec): min=4, max=2052, avg=23.34, stdev=105.90 00:12:29.339 clat (usec): min=113, max=5744, avg=1511.95, stdev=526.36 00:12:29.339 lat (usec): min=213, max=5864, avg=1535.29, stdev=513.54 00:12:29.339 clat percentiles (usec): 00:12:29.339 | 1.00th=[ 310], 5.00th=[ 676], 10.00th=[ 865], 20.00th=[ 1090], 00:12:29.339 | 30.00th=[ 1254], 40.00th=[ 1385], 50.00th=[ 1500], 60.00th=[ 1614], 00:12:29.339 | 70.00th=[ 1745], 80.00th=[ 1893], 90.00th=[ 2114], 95.00th=[ 2343], 00:12:29.339 | 99.00th=[ 3064], 99.50th=[ 3359], 99.90th=[ 3982], 99.95th=[ 4228], 00:12:29.339 | 99.99th=[ 4883] 00:12:29.339 bw ( KiB/s): min=116864, 
max=124848, per=100.00%, avg=120533.33, stdev=2631.01, samples=9 00:12:29.339 iops : min=29216, max=31212, avg=30133.22, stdev=657.78, samples=9 00:12:29.339 lat (usec) : 250=0.52%, 500=1.68%, 750=4.61%, 1000=8.57% 00:12:29.339 lat (msec) : 2=69.93%, 4=14.60%, 10=0.09% 00:12:29.339 cpu : usr=42.58%, sys=49.84%, ctx=16, majf=0, minf=1065 00:12:29.339 IO depths : 1=0.6%, 2=1.4%, 4=3.3%, 8=8.6%, 16=22.9%, 32=61.2%, >=64=2.1% 00:12:29.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.339 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:29.339 issued rwts: total=148866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:29.339 00:12:29.339 Run status group 0 (all jobs): 00:12:29.339 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=582MiB (610MB), run=5001-5001msec 00:12:29.915 ----------------------------------------------------- 00:12:29.915 Suppressions used: 00:12:29.915 count bytes template 00:12:29.915 1 11 /usr/src/fio/parse.c 00:12:29.915 1 8 libtcmalloc_minimal.so 00:12:29.915 1 904 libcrypto.so 00:12:29.915 ----------------------------------------------------- 00:12:29.915 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:29.915 21:18:19 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:29.915 21:18:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:29.915 { 00:12:29.915 "subsystems": [ 00:12:29.915 { 00:12:29.915 "subsystem": "bdev", 00:12:29.915 "config": [ 00:12:29.915 { 00:12:29.915 "params": { 00:12:29.915 "io_mechanism": "libaio", 00:12:29.915 "conserve_cpu": false, 00:12:29.915 "filename": "/dev/nvme0n1", 00:12:29.915 "name": "xnvme_bdev" 00:12:29.915 }, 00:12:29.915 "method": "bdev_xnvme_create" 00:12:29.915 }, 00:12:29.915 { 00:12:29.915 "method": "bdev_wait_for_examine" 00:12:29.915 } 00:12:29.915 ] 00:12:29.915 } 00:12:29.915 ] 00:12:29.915 } 00:12:29.915 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:29.915 fio-3.35 00:12:29.915 Starting 1 thread 00:12:36.506 00:12:36.506 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82600: Mon Dec 16 21:18:25 2024 00:12:36.506 write: IOPS=29.9k, BW=117MiB/s (122MB/s)(583MiB/5001msec); 0 zone resets 00:12:36.506 slat (usec): min=4, max=1958, avg=27.08, stdev=99.00 00:12:36.506 clat (usec): min=49, max=5355, avg=1404.39, stdev=600.72 00:12:36.506 lat (usec): min=198, max=5532, avg=1431.47, stdev=592.95 00:12:36.506 clat percentiles (usec): 00:12:36.506 | 1.00th=[ 277], 5.00th=[ 529], 10.00th=[ 693], 20.00th=[ 898], 00:12:36.506 | 30.00th=[ 1074], 40.00th=[ 1221], 50.00th=[ 1369], 60.00th=[ 1516], 00:12:36.506 | 70.00th=[ 1663], 80.00th=[ 1827], 90.00th=[ 2114], 95.00th=[ 2474], 00:12:36.506 | 99.00th=[ 3261], 99.50th=[ 3589], 99.90th=[ 4293], 99.95th=[ 4490], 00:12:36.506 | 99.99th=[ 4883] 00:12:36.506 bw ( KiB/s): min=111280, max=129968, per=100.00%, avg=119920.89, stdev=6097.73, samples=9 00:12:36.506 iops : min=27820, max=32492, avg=29980.22, stdev=1524.43, samples=9 00:12:36.506 lat (usec) : 50=0.01%, 250=0.70%, 500=3.70%, 750=8.15%, 1000=13.41% 00:12:36.506 lat (msec) : 2=60.77%, 4=13.06%, 10=0.21% 00:12:36.506 cpu : usr=32.12%, sys=57.00%, ctx=28, majf=0, minf=1066 00:12:36.506 IO depths : 1=0.3%, 2=0.9%, 4=2.7%, 8=8.3%, 16=24.0%, 32=61.8%, >=64=2.0% 00:12:36.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.506 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:36.506 issued rwts: total=0,149375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.506 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.506 00:12:36.506 Run status group 0 (all jobs): 00:12:36.506 WRITE: bw=117MiB/s (122MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s), io=583MiB (612MB), run=5001-5001msec 00:12:36.506 ----------------------------------------------------- 00:12:36.506 Suppressions used: 00:12:36.506 count bytes template 00:12:36.506 1 11 /usr/src/fio/parse.c 00:12:36.506 1 8 libtcmalloc_minimal.so 00:12:36.506 1 904 libcrypto.so 00:12:36.506 ----------------------------------------------------- 00:12:36.506 00:12:36.506 00:12:36.506 real 0m12.104s 00:12:36.506 user 0m4.891s 00:12:36.506 sys 0m5.912s 00:12:36.506 
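The fio_plugin wrapper traced above does one non-obvious thing before launching fio: because the spdk_bdev plugin is built with AddressSanitizer, the sanitizer runtime must be preloaded ahead of the plugin itself, so the wrapper walks its sanitizers list, locates the runtime with ldd, and prepends it to LD_PRELOAD. A condensed sketch of that detection loop, mirroring the trace:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # On this Fedora 39 host: asan_lib=/usr/lib64/libasan.so.8
    # The wrapper then execs:
    #   LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio <fio args as above>

Getting the order wrong (plugin before libasan) would make ASan abort at startup with a "runtime not loaded first" error, which is why the wrapper breaks out on the first match and builds LD_PRELOAD itself.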
************************************ 00:12:36.506 END TEST xnvme_fio_plugin 00:12:36.506 ************************************ 00:12:36.506 21:18:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.506 21:18:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:36.506 21:18:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:36.506 21:18:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:36.506 21:18:25 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:36.506 21:18:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:36.506 21:18:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:36.506 21:18:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.506 21:18:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:36.506 ************************************ 00:12:36.506 START TEST xnvme_rpc 00:12:36.506 ************************************ 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82676 00:12:36.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82676 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82676 ']' 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.506 21:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.506 [2024-12-16 21:18:25.602508] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
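This second xnvme_rpc pass is the conserve_cpu=true leg of the loop opened earlier: cc["false"] is empty and cc["true"] is -c (xnvme.sh@50), so the only difference from the first pass is the extra flag on the create call. The RPC pair being exercised, as it appears in the trace (rpc_cmd is the harness helper that forwards to the SPDK RPC client over /var/tmp/spdk.sock):

    # conserve_cpu=true: -c asks the xnvme bdev to batch completions
    # instead of busy-polling, trading latency for CPU.
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    # ... attribute assertions run here (see the rpc_xnvme checks) ...
    rpc_cmd bdev_xnvme_delete xnvme_bdev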
00:12:36.506 [2024-12-16 21:18:25.602679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82676 ] 00:12:36.506 [2024-12-16 21:18:25.743202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.506 [2024-12-16 21:18:25.772262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.768 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.768 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:36.768 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:36.768 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.768 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.029 xnvme_bdev 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.029 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82676 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82676 ']' 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82676 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82676 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.030 killing process with pid 82676 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82676' 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82676 00:12:37.030 21:18:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82676 00:12:37.603 00:12:37.603 real 0m1.488s 00:12:37.603 user 0m1.551s 00:12:37.603 sys 0m0.423s 00:12:37.603 21:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.603 ************************************ 00:12:37.603 END TEST xnvme_rpc 00:12:37.603 ************************************ 00:12:37.603 21:18:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.603 21:18:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:37.603 21:18:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:37.603 21:18:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.603 21:18:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.603 ************************************ 00:12:37.603 START TEST xnvme_bdevperf 00:12:37.603 ************************************ 00:12:37.603 21:18:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:37.603 21:18:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:37.603 21:18:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:37.603 21:18:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:37.603 21:18:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:37.603 21:18:27 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:37.603 21:18:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.603 21:18:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.603 { 00:12:37.603 "subsystems": [ 00:12:37.603 { 00:12:37.603 "subsystem": "bdev", 00:12:37.603 "config": [ 00:12:37.603 { 00:12:37.603 "params": { 00:12:37.603 "io_mechanism": "libaio", 00:12:37.603 "conserve_cpu": true, 00:12:37.603 "filename": "/dev/nvme0n1", 00:12:37.603 "name": "xnvme_bdev" 00:12:37.603 }, 00:12:37.603 "method": "bdev_xnvme_create" 00:12:37.603 }, 00:12:37.603 { 00:12:37.603 "method": "bdev_wait_for_examine" 00:12:37.603 } 00:12:37.603 ] 00:12:37.603 } 00:12:37.603 ] 00:12:37.603 } 00:12:37.603 [2024-12-16 21:18:27.136226] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:37.603 [2024-12-16 21:18:27.136367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82738 ] 00:12:37.603 [2024-12-16 21:18:27.282775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.865 [2024-12-16 21:18:27.311617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.865 Running I/O for 5 seconds... 00:12:39.820 27473.00 IOPS, 107.32 MiB/s [2024-12-16T21:18:30.466Z] 26530.00 IOPS, 103.63 MiB/s [2024-12-16T21:18:31.855Z] 26609.00 IOPS, 103.94 MiB/s [2024-12-16T21:18:32.802Z] 27029.25 IOPS, 105.58 MiB/s [2024-12-16T21:18:32.802Z] 26826.60 IOPS, 104.79 MiB/s 00:12:43.102 Latency(us) 00:12:43.102 [2024-12-16T21:18:32.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.102 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:43.102 xnvme_bdev : 5.01 26807.36 104.72 0.00 0.00 2382.31 497.82 7057.72 00:12:43.102 [2024-12-16T21:18:32.802Z] =================================================================================================================== 00:12:43.102 [2024-12-16T21:18:32.802Z] Total : 26807.36 104.72 0.00 0.00 2382.31 497.82 7057.72 00:12:43.102 21:18:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:43.102 21:18:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:43.102 21:18:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:43.102 21:18:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:43.102 21:18:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.102 { 00:12:43.102 "subsystems": [ 00:12:43.102 { 00:12:43.102 "subsystem": "bdev", 00:12:43.102 "config": [ 00:12:43.102 { 00:12:43.102 "params": { 00:12:43.102 "io_mechanism": "libaio", 00:12:43.102 "conserve_cpu": true, 00:12:43.102 "filename": "/dev/nvme0n1", 00:12:43.102 "name": "xnvme_bdev" 00:12:43.102 }, 00:12:43.102 "method": "bdev_xnvme_create" 00:12:43.102 }, 00:12:43.102 { 00:12:43.102 "method": "bdev_wait_for_examine" 00:12:43.102 } 00:12:43.102 ] 00:12:43.102 } 00:12:43.102 ] 00:12:43.102 } 00:12:43.102 [2024-12-16 21:18:32.723077] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
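Each attribute assertion in the rpc test (name, filename, io_mechanism, conserve_cpu) uses one pattern: dump the bdev subsystem config from the live target and pluck a single parameter of the bdev_xnvme_create entry with jq. The helper, reconstructed from common.sh@65-66 in the trace:

    rpc_xnvme() {
        local attr=$1
        rpc_cmd framework_get_config bdev \
            | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$attr"
    }
    # e.g. the assertion at xnvme.sh@65:
    #   [[ "$(rpc_xnvme conserve_cpu)" == true ]]

Querying the target's own config rather than trusting the create call's exit status verifies that the parameters actually round-tripped through the RPC layer, which is the point of the test.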
00:12:43.102 [2024-12-16 21:18:32.723211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82803 ] 00:12:43.364 [2024-12-16 21:18:32.869177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.364 [2024-12-16 21:18:32.897867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.364 Running I/O for 5 seconds... 00:12:45.325 29338.00 IOPS, 114.60 MiB/s [2024-12-16T21:18:36.411Z] 28358.00 IOPS, 110.77 MiB/s [2024-12-16T21:18:37.354Z] 27786.33 IOPS, 108.54 MiB/s [2024-12-16T21:18:38.297Z] 27251.75 IOPS, 106.45 MiB/s 00:12:48.597 Latency(us) 00:12:48.597 [2024-12-16T21:18:38.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.597 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:48.597 xnvme_bdev : 5.00 27222.45 106.34 0.00 0.00 2345.91 226.86 7561.85 00:12:48.597 [2024-12-16T21:18:38.297Z] =================================================================================================================== 00:12:48.597 [2024-12-16T21:18:38.297Z] Total : 27222.45 106.34 0.00 0.00 2345.91 226.86 7561.85 00:12:48.597 ************************************ 00:12:48.597 END TEST xnvme_bdevperf 00:12:48.597 ************************************ 00:12:48.597 00:12:48.597 real 0m11.163s 00:12:48.597 user 0m3.056s 00:12:48.597 sys 0m6.807s 00:12:48.597 21:18:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.597 21:18:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:48.597 21:18:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:48.597 21:18:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:48.597 21:18:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.597 21:18:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.859 ************************************ 00:12:48.859 START TEST xnvme_fio_plugin 00:12:48.859 ************************************ 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:48.859 21:18:38 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:48.859 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:48.860 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:48.860 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:48.860 21:18:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.860 { 00:12:48.860 "subsystems": [ 00:12:48.860 { 00:12:48.860 "subsystem": "bdev", 00:12:48.860 "config": [ 00:12:48.860 { 00:12:48.860 "params": { 00:12:48.860 "io_mechanism": "libaio", 00:12:48.860 "conserve_cpu": true, 00:12:48.860 "filename": "/dev/nvme0n1", 00:12:48.860 "name": "xnvme_bdev" 00:12:48.860 }, 00:12:48.860 "method": "bdev_xnvme_create" 00:12:48.860 }, 00:12:48.860 { 00:12:48.860 "method": "bdev_wait_for_examine" 00:12:48.860 } 00:12:48.860 ] 00:12:48.860 } 00:12:48.860 ] 00:12:48.860 } 00:12:48.860 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:48.860 fio-3.35 00:12:48.860 Starting 1 thread 00:12:55.450 00:12:55.450 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82906: Mon Dec 16 21:18:43 2024 00:12:55.450 read: IOPS=29.2k, BW=114MiB/s (119MB/s)(569MiB/5001msec) 00:12:55.450 slat (usec): min=4, max=5671, avg=25.09, stdev=109.96 00:12:55.450 clat (usec): min=107, max=6261, avg=1512.57, stdev=559.38 00:12:55.450 lat (usec): min=200, max=6308, avg=1537.66, stdev=547.25 00:12:55.450 clat percentiles (usec): 00:12:55.450 | 1.00th=[ 306], 5.00th=[ 644], 10.00th=[ 824], 20.00th=[ 1074], 00:12:55.450 | 30.00th=[ 1237], 40.00th=[ 1369], 50.00th=[ 1500], 60.00th=[ 1614], 00:12:55.450 | 70.00th=[ 1745], 80.00th=[ 1926], 90.00th=[ 2180], 95.00th=[ 2442], 00:12:55.450 | 99.00th=[ 3163], 99.50th=[ 3458], 99.90th=[ 4080], 99.95th=[ 4293], 00:12:55.451 | 99.99th=[ 6194] 00:12:55.451 bw ( KiB/s): min=109392, max=126704, per=100.00%, avg=116640.78, stdev=5276.67, 
samples=9 00:12:55.451 iops : min=27348, max=31676, avg=29160.11, stdev=1319.18, samples=9 00:12:55.451 lat (usec) : 250=0.50%, 500=2.14%, 750=5.21%, 1000=8.79% 00:12:55.451 lat (msec) : 2=67.41%, 4=15.82%, 10=0.13% 00:12:55.451 cpu : usr=38.22%, sys=53.86%, ctx=25, majf=0, minf=1065 00:12:55.451 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.2%, 16=23.1%, 32=61.8%, >=64=2.1% 00:12:55.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.451 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:55.451 issued rwts: total=145789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.451 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:55.451 00:12:55.451 Run status group 0 (all jobs): 00:12:55.451 READ: bw=114MiB/s (119MB/s), 114MiB/s-114MiB/s (119MB/s-119MB/s), io=569MiB (597MB), run=5001-5001msec 00:12:55.451 ----------------------------------------------------- 00:12:55.451 Suppressions used: 00:12:55.451 count bytes template 00:12:55.451 1 11 /usr/src/fio/parse.c 00:12:55.451 1 8 libtcmalloc_minimal.so 00:12:55.451 1 904 libcrypto.so 00:12:55.451 ----------------------------------------------------- 00:12:55.451 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:55.451 21:18:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.451 { 00:12:55.451 "subsystems": [ 00:12:55.451 { 00:12:55.451 "subsystem": "bdev", 00:12:55.451 "config": [ 00:12:55.451 { 00:12:55.451 "params": { 00:12:55.451 "io_mechanism": "libaio", 00:12:55.451 "conserve_cpu": true, 00:12:55.451 "filename": "/dev/nvme0n1", 00:12:55.451 "name": "xnvme_bdev" 00:12:55.451 }, 00:12:55.451 "method": "bdev_xnvme_create" 00:12:55.451 }, 00:12:55.451 { 00:12:55.451 "method": "bdev_wait_for_examine" 00:12:55.451 } 00:12:55.451 ] 00:12:55.451 } 00:12:55.451 ] 00:12:55.451 } 00:12:55.451 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:55.451 fio-3.35 00:12:55.451 Starting 1 thread 00:13:00.744 00:13:00.744 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82992: Mon Dec 16 21:18:50 2024 00:13:00.744 write: IOPS=30.6k, BW=120MiB/s (125MB/s)(598MiB/5001msec); 0 zone resets 00:13:00.744 slat (usec): min=4, max=1801, avg=24.81, stdev=100.44 00:13:00.744 clat (usec): min=108, max=6990, avg=1415.23, stdev=561.05 00:13:00.744 lat (usec): min=210, max=6996, avg=1440.04, stdev=551.72 00:13:00.744 clat percentiles (usec): 00:13:00.744 | 1.00th=[ 293], 5.00th=[ 562], 10.00th=[ 725], 20.00th=[ 955], 00:13:00.744 | 30.00th=[ 1123], 40.00th=[ 1254], 50.00th=[ 1385], 60.00th=[ 1500], 00:13:00.744 | 70.00th=[ 1647], 80.00th=[ 1844], 90.00th=[ 2114], 95.00th=[ 2409], 00:13:00.744 | 99.00th=[ 3064], 99.50th=[ 3326], 99.90th=[ 3982], 99.95th=[ 4228], 00:13:00.744 | 99.99th=[ 4621] 00:13:00.744 bw ( KiB/s): min=114936, max=130640, per=99.38%, avg=121648.00, stdev=5427.60, samples=9 00:13:00.744 iops : min=28734, max=32660, avg=30412.00, stdev=1356.90, samples=9 00:13:00.744 lat (usec) : 250=0.59%, 500=3.17%, 750=7.01%, 1000=11.62% 00:13:00.744 lat (msec) : 2=64.41%, 4=13.10%, 10=0.10% 00:13:00.744 cpu : usr=36.26%, sys=54.68%, ctx=59, majf=0, minf=1066 00:13:00.744 IO depths : 1=0.4%, 2=1.1%, 4=3.0%, 8=8.4%, 16=23.4%, 32=61.6%, >=64=2.1% 00:13:00.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.744 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:00.744 issued rwts: total=0,153034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.744 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:00.744 00:13:00.744 Run status group 0 (all jobs): 00:13:00.744 WRITE: bw=120MiB/s (125MB/s), 120MiB/s-120MiB/s (125MB/s-125MB/s), io=598MiB (627MB), run=5001-5001msec 00:13:01.005 ----------------------------------------------------- 00:13:01.005 Suppressions used: 00:13:01.005 count bytes template 00:13:01.005 1 11 /usr/src/fio/parse.c 00:13:01.005 1 8 libtcmalloc_minimal.so 00:13:01.005 1 904 libcrypto.so 00:13:01.005 ----------------------------------------------------- 00:13:01.005 00:13:01.005 00:13:01.005 real 0m12.213s 00:13:01.005 user 0m4.959s 00:13:01.005 sys 0m6.015s 00:13:01.005 21:18:50 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.005 ************************************ 00:13:01.005 END TEST xnvme_fio_plugin 00:13:01.005 ************************************ 00:13:01.005 21:18:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 21:18:50 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:01.005 21:18:50 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:01.005 21:18:50 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:01.005 21:18:50 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:01.005 21:18:50 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:01.005 21:18:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:01.005 21:18:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:01.005 21:18:50 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:01.005 21:18:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:01.005 21:18:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:01.005 21:18:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.005 21:18:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 ************************************ 00:13:01.005 START TEST xnvme_rpc 00:13:01.005 ************************************ 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83073 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83073 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83073 ']' 00:13:01.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.005 21:18:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 [2024-12-16 21:18:50.684377] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
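The xnvme_rpc test starting here swaps bdevperf for spdk_tgt and checks that every bdev_xnvme_create parameter round-trips through the RPC layer: create the bdev, read the config back with framework_get_config, compare field by field, then delete. Run by hand it would look roughly like the sketch below, assuming the stock scripts/rpc.py client that the harness's rpc_cmd wrapper resolves to; the jq filter mirrors the one traced from xnvme/common.sh.

    # Sketch: the round-trip this test performs (io_uring leg, no conserve_cpu flag).
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev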
00:13:01.005 [2024-12-16 21:18:50.684553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83073 ] 00:13:01.267 [2024-12-16 21:18:50.833003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.267 [2024-12-16 21:18:50.875070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.841 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.841 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:01.841 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:01.841 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.841 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.841 xnvme_bdev 00:13:01.841 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.841 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:01.841 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:01.841 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83073 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83073 ']' 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83073 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83073 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.102 killing process with pid 83073 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83073' 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83073 00:13:02.102 21:18:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83073 00:13:02.675 00:13:02.675 real 0m1.631s 00:13:02.675 user 0m1.604s 00:13:02.675 sys 0m0.497s 00:13:02.675 21:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.675 21:18:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.675 ************************************ 00:13:02.675 END TEST xnvme_rpc 00:13:02.675 ************************************ 00:13:02.675 21:18:52 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:02.675 21:18:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.675 21:18:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.675 21:18:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.675 ************************************ 00:13:02.675 START TEST xnvme_bdevperf 00:13:02.675 ************************************ 00:13:02.675 21:18:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:02.675 21:18:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:02.675 21:18:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:02.675 21:18:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:02.675 21:18:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:02.675 21:18:52 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:02.675 21:18:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:02.675 21:18:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:02.675 { 00:13:02.675 "subsystems": [ 00:13:02.675 { 00:13:02.675 "subsystem": "bdev", 00:13:02.675 "config": [ 00:13:02.675 { 00:13:02.675 "params": { 00:13:02.675 "io_mechanism": "io_uring", 00:13:02.675 "conserve_cpu": false, 00:13:02.675 "filename": "/dev/nvme0n1", 00:13:02.675 "name": "xnvme_bdev" 00:13:02.675 }, 00:13:02.675 "method": "bdev_xnvme_create" 00:13:02.675 }, 00:13:02.675 { 00:13:02.675 "method": "bdev_wait_for_examine" 00:13:02.675 } 00:13:02.675 ] 00:13:02.675 } 00:13:02.675 ] 00:13:02.675 } 00:13:02.675 [2024-12-16 21:18:52.367779] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:02.675 [2024-12-16 21:18:52.367913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83130 ] 00:13:02.937 [2024-12-16 21:18:52.518129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.937 [2024-12-16 21:18:52.557982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.199 Running I/O for 5 seconds... 00:13:05.089 32128.00 IOPS, 125.50 MiB/s [2024-12-16T21:18:55.731Z] 32160.00 IOPS, 125.62 MiB/s [2024-12-16T21:18:57.200Z] 31722.67 IOPS, 123.92 MiB/s [2024-12-16T21:18:57.774Z] 31424.00 IOPS, 122.75 MiB/s 00:13:08.075 Latency(us) 00:13:08.075 [2024-12-16T21:18:57.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.075 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:08.075 xnvme_bdev : 5.00 31457.66 122.88 0.00 0.00 2031.03 1052.36 4058.19 00:13:08.075 [2024-12-16T21:18:57.775Z] =================================================================================================================== 00:13:08.075 [2024-12-16T21:18:57.775Z] Total : 31457.66 122.88 0.00 0.00 2031.03 1052.36 4058.19 00:13:08.337 21:18:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:08.337 21:18:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:08.337 21:18:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:08.337 21:18:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:08.337 21:18:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:08.337 { 00:13:08.337 "subsystems": [ 00:13:08.337 { 00:13:08.337 "subsystem": "bdev", 00:13:08.337 "config": [ 00:13:08.337 { 00:13:08.337 "params": { 00:13:08.337 "io_mechanism": "io_uring", 00:13:08.337 "conserve_cpu": false, 00:13:08.337 "filename": "/dev/nvme0n1", 00:13:08.337 "name": "xnvme_bdev" 00:13:08.337 }, 00:13:08.337 "method": "bdev_xnvme_create" 00:13:08.337 }, 00:13:08.337 { 00:13:08.337 "method": "bdev_wait_for_examine" 00:13:08.337 } 00:13:08.337 ] 00:13:08.337 } 00:13:08.337 ] 00:13:08.337 } 00:13:08.337 [2024-12-16 21:18:57.961839] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
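A quick way to sanity-check the bdevperf tables in this log: with -o 4096 every I/O is 4 KiB, so MiB/s = IOPS * 4096 / 2^20 = IOPS / 256. For the io_uring randread total above, 31457.66 / 256 = 122.88 MiB/s, exactly the figure in the table; the same holds for the earlier libaio rows (26807.36 / 256 = 104.72).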
00:13:08.337 [2024-12-16 21:18:57.961978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83200 ] 00:13:08.598 [2024-12-16 21:18:58.102437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.598 [2024-12-16 21:18:58.131610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.598 Running I/O for 5 seconds... 00:13:10.932 34086.00 IOPS, 133.15 MiB/s [2024-12-16T21:19:01.577Z] 33518.50 IOPS, 130.93 MiB/s [2024-12-16T21:19:02.522Z] 33597.00 IOPS, 131.24 MiB/s [2024-12-16T21:19:03.469Z] 33652.50 IOPS, 131.46 MiB/s 00:13:13.769 Latency(us) 00:13:13.769 [2024-12-16T21:19:03.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.769 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:13.769 xnvme_bdev : 5.00 33626.23 131.35 0.00 0.00 1899.65 453.71 4990.82 00:13:13.769 [2024-12-16T21:19:03.469Z] =================================================================================================================== 00:13:13.769 [2024-12-16T21:19:03.469Z] Total : 33626.23 131.35 0.00 0.00 1899.65 453.71 4990.82 00:13:13.769 00:13:13.769 real 0m11.121s 00:13:13.769 user 0m4.747s 00:13:13.769 sys 0m6.141s 00:13:13.769 21:19:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.769 ************************************ 00:13:13.769 END TEST xnvme_bdevperf 00:13:13.769 21:19:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.769 ************************************ 00:13:14.031 21:19:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:14.031 21:19:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.031 21:19:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.031 21:19:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.031 ************************************ 00:13:14.031 START TEST xnvme_fio_plugin 00:13:14.031 ************************************ 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:14.031 21:19:03 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:14.031 21:19:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.031 { 00:13:14.031 "subsystems": [ 00:13:14.031 { 00:13:14.031 "subsystem": "bdev", 00:13:14.031 "config": [ 00:13:14.031 { 00:13:14.031 "params": { 00:13:14.031 "io_mechanism": "io_uring", 00:13:14.031 "conserve_cpu": false, 00:13:14.031 "filename": "/dev/nvme0n1", 00:13:14.031 "name": "xnvme_bdev" 00:13:14.031 }, 00:13:14.031 "method": "bdev_xnvme_create" 00:13:14.031 }, 00:13:14.031 { 00:13:14.031 "method": "bdev_wait_for_examine" 00:13:14.031 } 00:13:14.031 ] 00:13:14.031 } 00:13:14.031 ] 00:13:14.031 } 00:13:14.031 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:14.031 fio-3.35 00:13:14.031 Starting 1 thread 00:13:20.627 00:13:20.627 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83303: Mon Dec 16 21:19:09 2024 00:13:20.627 read: IOPS=31.6k, BW=124MiB/s (130MB/s)(618MiB/5002msec) 00:13:20.627 slat (usec): min=2, max=248, avg= 3.35, stdev= 2.10 00:13:20.627 clat (usec): min=992, max=6226, avg=1888.53, stdev=318.32 00:13:20.627 lat (usec): min=995, max=6229, avg=1891.89, stdev=318.54 00:13:20.627 clat percentiles (usec): 00:13:20.627 | 1.00th=[ 1287], 5.00th=[ 1418], 10.00th=[ 1500], 20.00th=[ 1614], 00:13:20.627 | 30.00th=[ 1713], 40.00th=[ 1795], 50.00th=[ 1876], 60.00th=[ 1958], 00:13:20.627 | 70.00th=[ 2040], 80.00th=[ 2114], 90.00th=[ 2278], 95.00th=[ 2409], 00:13:20.627 | 99.00th=[ 2737], 99.50th=[ 2900], 99.90th=[ 3261], 99.95th=[ 3490], 00:13:20.627 | 99.99th=[ 6194] 00:13:20.627 bw ( KiB/s): min=124416, max=130048, per=99.96%, avg=126464.00, 
stdev=1863.71, samples=9 00:13:20.627 iops : min=31104, max=32512, avg=31616.00, stdev=465.93, samples=9 00:13:20.627 lat (usec) : 1000=0.01% 00:13:20.627 lat (msec) : 2=65.19%, 4=34.77%, 10=0.04% 00:13:20.627 cpu : usr=31.83%, sys=66.57%, ctx=13, majf=0, minf=1063 00:13:20.627 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:20.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.627 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:20.627 issued rwts: total=158208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.627 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:20.627 00:13:20.627 Run status group 0 (all jobs): 00:13:20.627 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=618MiB (648MB), run=5002-5002msec 00:13:20.627 ----------------------------------------------------- 00:13:20.627 Suppressions used: 00:13:20.627 count bytes template 00:13:20.627 1 11 /usr/src/fio/parse.c 00:13:20.627 1 8 libtcmalloc_minimal.so 00:13:20.627 1 904 libcrypto.so 00:13:20.627 ----------------------------------------------------- 00:13:20.627 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:20.627 21:19:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.627 { 00:13:20.627 "subsystems": [ 00:13:20.627 { 00:13:20.627 "subsystem": "bdev", 00:13:20.627 "config": [ 00:13:20.627 { 00:13:20.627 "params": { 00:13:20.627 "io_mechanism": "io_uring", 00:13:20.627 "conserve_cpu": false, 00:13:20.627 "filename": "/dev/nvme0n1", 00:13:20.627 "name": "xnvme_bdev" 00:13:20.627 }, 00:13:20.627 "method": "bdev_xnvme_create" 00:13:20.627 }, 00:13:20.627 { 00:13:20.627 "method": "bdev_wait_for_examine" 00:13:20.627 } 00:13:20.627 ] 00:13:20.627 } 00:13:20.627 ] 00:13:20.627 } 00:13:20.627 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:20.627 fio-3.35 00:13:20.627 Starting 1 thread 00:13:25.922 00:13:25.922 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83383: Mon Dec 16 21:19:15 2024 00:13:25.922 write: IOPS=33.7k, BW=132MiB/s (138MB/s)(658MiB/5002msec); 0 zone resets 00:13:25.922 slat (usec): min=2, max=825, avg= 3.52, stdev= 2.75 00:13:25.922 clat (usec): min=636, max=7354, avg=1760.89, stdev=280.28 00:13:25.922 lat (usec): min=639, max=7358, avg=1764.42, stdev=280.50 00:13:25.922 clat percentiles (usec): 00:13:25.922 | 1.00th=[ 1237], 5.00th=[ 1369], 10.00th=[ 1434], 20.00th=[ 1532], 00:13:25.922 | 30.00th=[ 1598], 40.00th=[ 1663], 50.00th=[ 1729], 60.00th=[ 1811], 00:13:25.922 | 70.00th=[ 1876], 80.00th=[ 1975], 90.00th=[ 2114], 95.00th=[ 2245], 00:13:25.922 | 99.00th=[ 2540], 99.50th=[ 2671], 99.90th=[ 3228], 99.95th=[ 3752], 00:13:25.922 | 99.99th=[ 4047] 00:13:25.922 bw ( KiB/s): min=131056, max=141664, per=100.00%, avg=135205.33, stdev=3423.35, samples=9 00:13:25.922 iops : min=32764, max=35416, avg=33801.33, stdev=855.84, samples=9 00:13:25.922 lat (usec) : 750=0.01%, 1000=0.05% 00:13:25.922 lat (msec) : 2=81.95%, 4=17.98%, 10=0.02% 00:13:25.922 cpu : usr=32.77%, sys=65.75%, ctx=34, majf=0, minf=1064 00:13:25.922 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:25.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.922 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:25.922 issued rwts: total=0,168528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.922 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:25.922 00:13:25.922 Run status group 0 (all jobs): 00:13:25.922 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=658MiB (690MB), run=5002-5002msec 00:13:25.922 ----------------------------------------------------- 00:13:25.922 Suppressions used: 00:13:25.922 count bytes template 00:13:25.922 1 11 /usr/src/fio/parse.c 00:13:25.922 1 8 libtcmalloc_minimal.so 00:13:25.922 1 904 libcrypto.so 00:13:25.922 ----------------------------------------------------- 00:13:25.922 00:13:25.922 00:13:25.922 real 0m12.024s 00:13:25.922 user 0m4.397s 00:13:25.922 sys 0m7.157s 00:13:25.922 21:19:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:13:25.922 ************************************ 00:13:25.922 END TEST xnvme_fio_plugin 00:13:25.922 ************************************ 00:13:25.922 21:19:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:25.922 21:19:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:25.922 21:19:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:25.922 21:19:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:25.922 21:19:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:25.922 21:19:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.922 21:19:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.922 21:19:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.922 ************************************ 00:13:25.922 START TEST xnvme_rpc 00:13:25.922 ************************************ 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83464 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83464 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83464 ']' 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:25.922 21:19:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.183 [2024-12-16 21:19:15.688740] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
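This second xnvme_rpc pass exercises the other leg of the cc table set up above: cc["true"] expands to -c, so bdev_xnvme_create is invoked with the conserve_cpu flag and the test asserts that the parameter reads back as true. The hand-run equivalent, under the same scripts/rpc.py assumption as before:

    # Sketch: conserve_cpu variant; -c is appended exactly as in the trace.
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true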
00:13:26.183 [2024-12-16 21:19:15.688902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83464 ] 00:13:26.183 [2024-12-16 21:19:15.837903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.183 [2024-12-16 21:19:15.879145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.129 xnvme_bdev 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83464 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83464 ']' 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83464 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83464 00:13:27.129 killing process with pid 83464 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83464' 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83464 00:13:27.129 21:19:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83464 00:13:27.391 00:13:27.391 real 0m1.424s 00:13:27.391 user 0m1.375s 00:13:27.391 sys 0m0.530s 00:13:27.391 21:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.391 21:19:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.391 ************************************ 00:13:27.391 END TEST xnvme_rpc 00:13:27.391 ************************************ 00:13:27.391 21:19:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:27.391 21:19:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.391 21:19:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.391 21:19:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.391 ************************************ 00:13:27.391 START TEST xnvme_bdevperf 00:13:27.391 ************************************ 00:13:27.391 21:19:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:27.391 21:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:27.391 21:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:27.391 21:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:27.391 21:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:27.391 21:19:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:13:27.391 21:19:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:27.391 21:19:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:27.654 { 00:13:27.654 "subsystems": [ 00:13:27.654 { 00:13:27.654 "subsystem": "bdev", 00:13:27.654 "config": [ 00:13:27.654 { 00:13:27.654 "params": { 00:13:27.654 "io_mechanism": "io_uring", 00:13:27.654 "conserve_cpu": true, 00:13:27.654 "filename": "/dev/nvme0n1", 00:13:27.654 "name": "xnvme_bdev" 00:13:27.654 }, 00:13:27.654 "method": "bdev_xnvme_create" 00:13:27.654 }, 00:13:27.654 { 00:13:27.654 "method": "bdev_wait_for_examine" 00:13:27.654 } 00:13:27.654 ] 00:13:27.654 } 00:13:27.654 ] 00:13:27.654 } 00:13:27.654 [2024-12-16 21:19:17.150157] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:27.654 [2024-12-16 21:19:17.150291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83516 ] 00:13:27.654 [2024-12-16 21:19:17.297787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.654 [2024-12-16 21:19:17.327197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.916 Running I/O for 5 seconds... 00:13:29.805 31616.00 IOPS, 123.50 MiB/s [2024-12-16T21:19:20.447Z] 31872.00 IOPS, 124.50 MiB/s [2024-12-16T21:19:21.834Z] 32298.67 IOPS, 126.17 MiB/s [2024-12-16T21:19:22.777Z] 32224.00 IOPS, 125.88 MiB/s [2024-12-16T21:19:22.777Z] 32217.60 IOPS, 125.85 MiB/s 00:13:33.077 Latency(us) 00:13:33.077 [2024-12-16T21:19:22.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.077 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:33.077 xnvme_bdev : 5.01 32195.29 125.76 0.00 0.00 1984.26 1096.47 4436.28 00:13:33.077 [2024-12-16T21:19:22.777Z] =================================================================================================================== 00:13:33.077 [2024-12-16T21:19:22.777Z] Total : 32195.29 125.76 0.00 0.00 1984.26 1096.47 4436.28 00:13:33.077 21:19:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:33.077 21:19:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:33.077 21:19:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:33.077 21:19:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:33.077 21:19:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:33.077 { 00:13:33.077 "subsystems": [ 00:13:33.077 { 00:13:33.077 "subsystem": "bdev", 00:13:33.077 "config": [ 00:13:33.077 { 00:13:33.077 "params": { 00:13:33.077 "io_mechanism": "io_uring", 00:13:33.077 "conserve_cpu": true, 00:13:33.077 "filename": "/dev/nvme0n1", 00:13:33.077 "name": "xnvme_bdev" 00:13:33.077 }, 00:13:33.077 "method": "bdev_xnvme_create" 00:13:33.077 }, 00:13:33.077 { 00:13:33.077 "method": "bdev_wait_for_examine" 00:13:33.077 } 00:13:33.078 ] 00:13:33.078 } 00:13:33.078 ] 00:13:33.078 } 00:13:33.078 [2024-12-16 21:19:22.683401] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
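The remaining passes below repeat this flow for io_uring with conserve_cpu enabled, first under bdevperf and then under fio's external spdk_bdev ioengine. Stripped of the sanitizer plumbing assembled by autotest_common.sh (the LD_PRELOAD pair traced earlier), the fio side reduces to roughly the sketch below; the flags are verbatim from the trace, and /tmp/xnvme_conf.json is a hypothetical stand-in for the /dev/fd/62 config.

    # Sketch: the fio invocation behind each xnvme_fio_plugin run.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_conf.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev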
00:13:33.078 [2024-12-16 21:19:22.683540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83586 ] 00:13:33.338 [2024-12-16 21:19:22.829369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.338 [2024-12-16 21:19:22.858391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.338 Running I/O for 5 seconds... 00:13:35.758 33786.00 IOPS, 131.98 MiB/s [2024-12-16T21:19:26.030Z] 33530.00 IOPS, 130.98 MiB/s [2024-12-16T21:19:26.971Z] 33675.00 IOPS, 131.54 MiB/s [2024-12-16T21:19:28.356Z] 33863.25 IOPS, 132.28 MiB/s [2024-12-16T21:19:28.356Z] 33794.00 IOPS, 132.01 MiB/s 00:13:38.656 Latency(us) 00:13:38.656 [2024-12-16T21:19:28.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.656 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:38.656 xnvme_bdev : 5.00 33792.59 132.00 0.00 0.00 1890.18 995.64 4133.81 00:13:38.656 [2024-12-16T21:19:28.356Z] =================================================================================================================== 00:13:38.656 [2024-12-16T21:19:28.356Z] Total : 33792.59 132.00 0.00 0.00 1890.18 995.64 4133.81 00:13:38.656 00:13:38.656 real 0m11.145s 00:13:38.656 user 0m8.072s 00:13:38.656 sys 0m2.602s 00:13:38.656 ************************************ 00:13:38.656 END TEST xnvme_bdevperf 00:13:38.656 ************************************ 00:13:38.656 21:19:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.657 21:19:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:38.657 21:19:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:38.657 21:19:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:38.657 21:19:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.657 21:19:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.657 ************************************ 00:13:38.657 START TEST xnvme_fio_plugin 00:13:38.657 ************************************ 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:38.657 21:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.657 { 00:13:38.657 "subsystems": [ 00:13:38.657 { 00:13:38.657 "subsystem": "bdev", 00:13:38.657 "config": [ 00:13:38.657 { 00:13:38.657 "params": { 00:13:38.657 "io_mechanism": "io_uring", 00:13:38.657 "conserve_cpu": true, 00:13:38.657 "filename": "/dev/nvme0n1", 00:13:38.657 "name": "xnvme_bdev" 00:13:38.657 }, 00:13:38.657 "method": "bdev_xnvme_create" 00:13:38.657 }, 00:13:38.657 { 00:13:38.657 "method": "bdev_wait_for_examine" 00:13:38.657 } 00:13:38.657 ] 00:13:38.657 } 00:13:38.657 ] 00:13:38.657 } 00:13:38.918 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:38.918 fio-3.35 00:13:38.918 Starting 1 thread 00:13:44.211 00:13:44.211 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83694: Mon Dec 16 21:19:33 2024 00:13:44.211 read: IOPS=31.6k, BW=123MiB/s (129MB/s)(617MiB/5001msec) 00:13:44.211 slat (nsec): min=2879, max=74988, avg=3448.62, stdev=1669.76 00:13:44.211 clat (usec): min=1037, max=3590, avg=1888.15, stdev=287.20 00:13:44.211 lat (usec): min=1040, max=3626, avg=1891.59, stdev=287.49 00:13:44.211 clat percentiles (usec): 00:13:44.211 | 1.00th=[ 1336], 5.00th=[ 1450], 10.00th=[ 1532], 20.00th=[ 1631], 00:13:44.211 | 30.00th=[ 1729], 40.00th=[ 1795], 50.00th=[ 1876], 60.00th=[ 1942], 00:13:44.211 | 70.00th=[ 2024], 80.00th=[ 2114], 90.00th=[ 2245], 95.00th=[ 2376], 00:13:44.211 | 99.00th=[ 2671], 99.50th=[ 2802], 99.90th=[ 3064], 99.95th=[ 3130], 00:13:44.211 | 99.99th=[ 3392] 00:13:44.211 bw 
( KiB/s): min=123904, max=129536, per=100.00%, avg=126691.56, stdev=2203.85, samples=9 00:13:44.211 iops : min=30976, max=32384, avg=31672.89, stdev=550.96, samples=9 00:13:44.211 lat (msec) : 2=66.63%, 4=33.37% 00:13:44.211 cpu : usr=68.54%, sys=28.24%, ctx=15, majf=0, minf=1063 00:13:44.211 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:44.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.211 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:44.211 issued rwts: total=157888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:44.211 00:13:44.211 Run status group 0 (all jobs): 00:13:44.211 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=617MiB (647MB), run=5001-5001msec 00:13:44.783 ----------------------------------------------------- 00:13:44.784 Suppressions used: 00:13:44.784 count bytes template 00:13:44.784 1 11 /usr/src/fio/parse.c 00:13:44.784 1 8 libtcmalloc_minimal.so 00:13:44.784 1 904 libcrypto.so 00:13:44.784 ----------------------------------------------------- 00:13:44.784 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:44.784 21:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.784 { 00:13:44.784 "subsystems": [ 00:13:44.784 { 00:13:44.784 "subsystem": "bdev", 00:13:44.784 "config": [ 00:13:44.784 { 00:13:44.784 "params": { 00:13:44.784 "io_mechanism": "io_uring", 00:13:44.784 "conserve_cpu": true, 00:13:44.784 "filename": "/dev/nvme0n1", 00:13:44.784 "name": "xnvme_bdev" 00:13:44.784 }, 00:13:44.784 "method": "bdev_xnvme_create" 00:13:44.784 }, 00:13:44.784 { 00:13:44.784 "method": "bdev_wait_for_examine" 00:13:44.784 } 00:13:44.784 ] 00:13:44.784 } 00:13:44.784 ] 00:13:44.784 } 00:13:45.045 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:45.045 fio-3.35 00:13:45.045 Starting 1 thread 00:13:50.336 00:13:50.336 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83780: Mon Dec 16 21:19:39 2024 00:13:50.336 write: IOPS=33.5k, BW=131MiB/s (137MB/s)(654MiB/5001msec); 0 zone resets 00:13:50.336 slat (usec): min=2, max=475, avg= 3.53, stdev= 2.22 00:13:50.336 clat (usec): min=994, max=5039, avg=1771.01, stdev=270.48 00:13:50.336 lat (usec): min=997, max=5047, avg=1774.54, stdev=270.68 00:13:50.336 clat percentiles (usec): 00:13:50.336 | 1.00th=[ 1270], 5.00th=[ 1385], 10.00th=[ 1450], 20.00th=[ 1532], 00:13:50.336 | 30.00th=[ 1614], 40.00th=[ 1680], 50.00th=[ 1745], 60.00th=[ 1811], 00:13:50.336 | 70.00th=[ 1909], 80.00th=[ 1991], 90.00th=[ 2114], 95.00th=[ 2245], 00:13:50.336 | 99.00th=[ 2474], 99.50th=[ 2606], 99.90th=[ 2966], 99.95th=[ 3163], 00:13:50.336 | 99.99th=[ 3523] 00:13:50.336 bw ( KiB/s): min=129712, max=143360, per=100.00%, avg=134199.00, stdev=3790.29, samples=9 00:13:50.336 iops : min=32428, max=35840, avg=33549.67, stdev=947.63, samples=9 00:13:50.336 lat (usec) : 1000=0.01% 00:13:50.336 lat (msec) : 2=80.10%, 4=19.89%, 10=0.01% 00:13:50.336 cpu : usr=74.44%, sys=22.02%, ctx=17, majf=0, minf=1064 00:13:50.336 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:50.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.336 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:50.336 issued rwts: total=0,167516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.336 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.336 00:13:50.336 Run status group 0 (all jobs): 00:13:50.336 WRITE: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=654MiB (686MB), run=5001-5001msec 00:13:50.597 ----------------------------------------------------- 00:13:50.597 Suppressions used: 00:13:50.597 count bytes template 00:13:50.597 1 11 /usr/src/fio/parse.c 00:13:50.597 1 8 libtcmalloc_minimal.so 00:13:50.597 1 904 libcrypto.so 00:13:50.597 ----------------------------------------------------- 00:13:50.597 00:13:50.597 ************************************ 00:13:50.597 END TEST xnvme_fio_plugin 00:13:50.597 ************************************ 00:13:50.597 00:13:50.597 real 
0m11.994s 00:13:50.597 user 0m8.257s 00:13:50.597 sys 0m3.084s 00:13:50.597 21:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.597 21:19:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:50.863 21:19:40 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:50.863 21:19:40 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:50.863 21:19:40 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:50.863 21:19:40 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:50.863 21:19:40 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:50.863 21:19:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:50.863 21:19:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:50.863 21:19:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:50.863 21:19:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:50.863 21:19:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.863 21:19:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.863 21:19:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.863 ************************************ 00:13:50.863 START TEST xnvme_rpc 00:13:50.863 ************************************ 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:50.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83855 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83855 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83855 ']' 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.863 21:19:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:50.863 [2024-12-16 21:19:40.439093] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:13:50.863 [2024-12-16 21:19:40.439250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83855 ] 00:13:51.124 [2024-12-16 21:19:40.588256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.124 [2024-12-16 21:19:40.617418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.696 xnvme_bdev 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:51.696 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83855 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83855 ']' 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83855 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83855 00:13:51.958 killing process with pid 83855 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83855' 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83855 00:13:51.958 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83855 00:13:52.219 00:13:52.219 real 0m1.406s 00:13:52.219 user 0m1.506s 00:13:52.219 sys 0m0.386s 00:13:52.219 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.219 21:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.219 ************************************ 00:13:52.219 END TEST xnvme_rpc 00:13:52.219 ************************************ 00:13:52.219 21:19:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:52.219 21:19:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:52.219 21:19:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.219 21:19:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:52.219 ************************************ 00:13:52.219 START TEST xnvme_bdevperf 00:13:52.219 ************************************ 00:13:52.219 21:19:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:52.219 21:19:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:52.219 21:19:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:52.219 21:19:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:52.219 21:19:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:52.219 21:19:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:52.219 21:19:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:52.219 21:19:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:52.219 { 00:13:52.219 "subsystems": [ 00:13:52.219 { 00:13:52.219 "subsystem": "bdev", 00:13:52.219 "config": [ 00:13:52.219 { 00:13:52.219 "params": { 00:13:52.219 "io_mechanism": "io_uring_cmd", 00:13:52.219 "conserve_cpu": false, 00:13:52.219 "filename": "/dev/ng0n1", 00:13:52.219 "name": "xnvme_bdev" 00:13:52.219 }, 00:13:52.219 "method": "bdev_xnvme_create" 00:13:52.219 }, 00:13:52.219 { 00:13:52.219 "method": "bdev_wait_for_examine" 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 } 00:13:52.219 [2024-12-16 21:19:41.895612] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:52.219 [2024-12-16 21:19:41.895765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83907 ] 00:13:52.481 [2024-12-16 21:19:42.034712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.481 [2024-12-16 21:19:42.063331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.481 Running I/O for 5 seconds... 00:13:54.815 33920.00 IOPS, 132.50 MiB/s [2024-12-16T21:19:45.460Z] 33248.00 IOPS, 129.88 MiB/s [2024-12-16T21:19:46.404Z] 34112.00 IOPS, 133.25 MiB/s [2024-12-16T21:19:47.347Z] 33888.00 IOPS, 132.38 MiB/s [2024-12-16T21:19:47.347Z] 33638.40 IOPS, 131.40 MiB/s 00:13:57.647 Latency(us) 00:13:57.647 [2024-12-16T21:19:47.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.647 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:57.647 xnvme_bdev : 5.00 33634.73 131.39 0.00 0.00 1899.15 1209.90 4159.02 00:13:57.647 [2024-12-16T21:19:47.347Z] =================================================================================================================== 00:13:57.647 [2024-12-16T21:19:47.347Z] Total : 33634.73 131.39 0.00 0.00 1899.15 1209.90 4159.02 00:13:57.647 21:19:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.647 21:19:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:57.647 21:19:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:57.647 21:19:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:57.647 21:19:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:57.908 { 00:13:57.908 "subsystems": [ 00:13:57.908 { 00:13:57.908 "subsystem": "bdev", 00:13:57.908 "config": [ 00:13:57.908 { 00:13:57.908 "params": { 00:13:57.908 "io_mechanism": "io_uring_cmd", 00:13:57.908 "conserve_cpu": false, 00:13:57.908 "filename": "/dev/ng0n1", 00:13:57.908 "name": "xnvme_bdev" 00:13:57.908 }, 00:13:57.908 "method": "bdev_xnvme_create" 00:13:57.908 }, 00:13:57.908 { 00:13:57.908 "method": "bdev_wait_for_examine" 00:13:57.908 } 00:13:57.908 ] 00:13:57.908 } 00:13:57.908 ] 00:13:57.908 } 00:13:57.908 [2024-12-16 21:19:47.410333] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:13:57.908 [2024-12-16 21:19:47.410467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83976 ] 00:13:57.908 [2024-12-16 21:19:47.558944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.908 [2024-12-16 21:19:47.587617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.170 Running I/O for 5 seconds... 00:14:00.055 34482.00 IOPS, 134.70 MiB/s [2024-12-16T21:19:50.697Z] 34162.50 IOPS, 133.45 MiB/s [2024-12-16T21:19:52.086Z] 34990.00 IOPS, 136.68 MiB/s [2024-12-16T21:19:52.726Z] 34945.25 IOPS, 136.50 MiB/s [2024-12-16T21:19:52.726Z] 34522.20 IOPS, 134.85 MiB/s 00:14:03.026 Latency(us) 00:14:03.026 [2024-12-16T21:19:52.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.026 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:03.026 xnvme_bdev : 5.00 34519.51 134.84 0.00 0.00 1850.41 926.33 4108.60 00:14:03.026 [2024-12-16T21:19:52.726Z] =================================================================================================================== 00:14:03.026 [2024-12-16T21:19:52.726Z] Total : 34519.51 134.84 0.00 0.00 1850.41 926.33 4108.60 00:14:03.287 21:19:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:03.287 21:19:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:03.287 21:19:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:03.287 21:19:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:03.287 21:19:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:03.287 { 00:14:03.287 "subsystems": [ 00:14:03.287 { 00:14:03.287 "subsystem": "bdev", 00:14:03.287 "config": [ 00:14:03.287 { 00:14:03.287 "params": { 00:14:03.287 "io_mechanism": "io_uring_cmd", 00:14:03.287 "conserve_cpu": false, 00:14:03.287 "filename": "/dev/ng0n1", 00:14:03.287 "name": "xnvme_bdev" 00:14:03.287 }, 00:14:03.287 "method": "bdev_xnvme_create" 00:14:03.287 }, 00:14:03.287 { 00:14:03.287 "method": "bdev_wait_for_examine" 00:14:03.287 } 00:14:03.287 ] 00:14:03.287 } 00:14:03.287 ] 00:14:03.287 } 00:14:03.287 [2024-12-16 21:19:52.949783] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:03.287 [2024-12-16 21:19:52.949911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84039 ] 00:14:03.548 [2024-12-16 21:19:53.098718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.548 [2024-12-16 21:19:53.127449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.548 Running I/O for 5 seconds... 
00:14:05.880 79040.00 IOPS, 308.75 MiB/s [2024-12-16T21:19:56.523Z] 79200.00 IOPS, 309.38 MiB/s [2024-12-16T21:19:57.464Z] 76842.67 IOPS, 300.17 MiB/s [2024-12-16T21:19:58.407Z] 79184.00 IOPS, 309.31 MiB/s 00:14:08.707 Latency(us) 00:14:08.707 [2024-12-16T21:19:58.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.707 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:08.707 xnvme_bdev : 5.00 82087.46 320.65 0.00 0.00 776.38 466.31 2697.06 00:14:08.707 [2024-12-16T21:19:58.407Z] =================================================================================================================== 00:14:08.707 [2024-12-16T21:19:58.407Z] Total : 82087.46 320.65 0.00 0.00 776.38 466.31 2697.06 00:14:08.707 21:19:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:08.707 21:19:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:08.707 21:19:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:08.707 21:19:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:08.707 21:19:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:08.968 { 00:14:08.968 "subsystems": [ 00:14:08.968 { 00:14:08.968 "subsystem": "bdev", 00:14:08.968 "config": [ 00:14:08.968 { 00:14:08.968 "params": { 00:14:08.968 "io_mechanism": "io_uring_cmd", 00:14:08.968 "conserve_cpu": false, 00:14:08.968 "filename": "/dev/ng0n1", 00:14:08.968 "name": "xnvme_bdev" 00:14:08.968 }, 00:14:08.968 "method": "bdev_xnvme_create" 00:14:08.968 }, 00:14:08.968 { 00:14:08.968 "method": "bdev_wait_for_examine" 00:14:08.968 } 00:14:08.968 ] 00:14:08.968 } 00:14:08.968 ] 00:14:08.968 } 00:14:08.968 [2024-12-16 21:19:58.454772] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:08.968 [2024-12-16 21:19:58.454998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84108 ] 00:14:08.968 [2024-12-16 21:19:58.598037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.968 [2024-12-16 21:19:58.621082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.229 Running I/O for 5 seconds... 
00:14:11.106 46703.00 IOPS, 182.43 MiB/s [2024-12-16T21:20:01.744Z] 42933.50 IOPS, 167.71 MiB/s [2024-12-16T21:20:03.122Z] 41405.33 IOPS, 161.74 MiB/s [2024-12-16T21:20:04.063Z] 42918.00 IOPS, 167.65 MiB/s [2024-12-16T21:20:04.063Z] 43262.80 IOPS, 169.00 MiB/s 00:14:14.363 Latency(us) 00:14:14.363 [2024-12-16T21:20:04.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.363 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:14.363 xnvme_bdev : 5.00 43237.25 168.90 0.00 0.00 1476.07 199.29 21273.99 00:14:14.363 [2024-12-16T21:20:04.063Z] =================================================================================================================== 00:14:14.363 [2024-12-16T21:20:04.063Z] Total : 43237.25 168.90 0.00 0.00 1476.07 199.29 21273.99 00:14:14.363 00:14:14.363 real 0m22.135s 00:14:14.363 user 0m10.923s 00:14:14.363 sys 0m10.763s 00:14:14.363 21:20:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.363 ************************************ 00:14:14.363 END TEST xnvme_bdevperf 00:14:14.363 ************************************ 00:14:14.363 21:20:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:14.363 21:20:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:14.363 21:20:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:14.363 21:20:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.363 21:20:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.363 ************************************ 00:14:14.363 START TEST xnvme_fio_plugin 00:14:14.363 ************************************ 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- 
xnvme/xnvme.sh@32 -- # gen_conf 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:14.363 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:14.364 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:14.364 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:14.364 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:14.364 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:14.364 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:14.364 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:14.364 21:20:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:14.625 { 00:14:14.625 "subsystems": [ 00:14:14.625 { 00:14:14.625 "subsystem": "bdev", 00:14:14.625 "config": [ 00:14:14.625 { 00:14:14.625 "params": { 00:14:14.625 "io_mechanism": "io_uring_cmd", 00:14:14.625 "conserve_cpu": false, 00:14:14.625 "filename": "/dev/ng0n1", 00:14:14.625 "name": "xnvme_bdev" 00:14:14.625 }, 00:14:14.625 "method": "bdev_xnvme_create" 00:14:14.625 }, 00:14:14.625 { 00:14:14.625 "method": "bdev_wait_for_examine" 00:14:14.625 } 00:14:14.625 ] 00:14:14.625 } 00:14:14.625 ] 00:14:14.625 } 00:14:14.625 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:14.625 fio-3.35 00:14:14.625 Starting 1 thread 00:14:21.218 00:14:21.218 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84215: Mon Dec 16 21:20:09 2024 00:14:21.218 read: IOPS=35.8k, BW=140MiB/s (147MB/s)(699MiB/5001msec) 00:14:21.218 slat (usec): min=2, max=103, avg= 3.45, stdev= 1.76 00:14:21.218 clat (usec): min=406, max=3247, avg=1648.02, stdev=251.50 00:14:21.218 lat (usec): min=410, max=3285, avg=1651.47, stdev=251.76 00:14:21.218 clat percentiles (usec): 00:14:21.218 | 1.00th=[ 1139], 5.00th=[ 1270], 10.00th=[ 1336], 20.00th=[ 1434], 00:14:21.218 | 30.00th=[ 1516], 40.00th=[ 1565], 50.00th=[ 1631], 60.00th=[ 1696], 00:14:21.218 | 70.00th=[ 1762], 80.00th=[ 1844], 90.00th=[ 1975], 95.00th=[ 2089], 00:14:21.218 | 99.00th=[ 2343], 99.50th=[ 2442], 99.90th=[ 2737], 99.95th=[ 2900], 00:14:21.218 | 99.99th=[ 3097] 00:14:21.218 bw ( KiB/s): min=137728, max=155136, per=100.00%, avg=143985.78, stdev=7302.86, samples=9 00:14:21.218 iops : min=34432, max=38784, avg=35996.44, stdev=1825.71, samples=9 00:14:21.218 lat (usec) : 500=0.01%, 1000=0.07% 00:14:21.218 lat (msec) : 2=91.61%, 4=8.31% 00:14:21.218 cpu : usr=37.62%, sys=61.20%, ctx=9, majf=0, minf=1063 00:14:21.219 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:21.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.219 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:14:21.219 issued rwts: total=178956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.219 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:21.219 00:14:21.219 Run status group 0 (all jobs): 00:14:21.219 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=699MiB (733MB), run=5001-5001msec 00:14:21.219 ----------------------------------------------------- 00:14:21.219 Suppressions used: 00:14:21.219 count bytes template 00:14:21.219 1 11 /usr/src/fio/parse.c 00:14:21.219 1 8 libtcmalloc_minimal.so 00:14:21.219 1 904 libcrypto.so 00:14:21.219 ----------------------------------------------------- 00:14:21.219 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:21.219 21:20:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:21.219 { 00:14:21.219 "subsystems": [ 00:14:21.219 { 00:14:21.219 "subsystem": "bdev", 00:14:21.219 "config": [ 00:14:21.219 { 00:14:21.219 "params": { 00:14:21.219 "io_mechanism": "io_uring_cmd", 00:14:21.219 "conserve_cpu": false, 00:14:21.219 "filename": "/dev/ng0n1", 00:14:21.219 "name": "xnvme_bdev" 00:14:21.219 }, 00:14:21.219 "method": "bdev_xnvme_create" 00:14:21.219 }, 00:14:21.219 { 00:14:21.219 "method": "bdev_wait_for_examine" 00:14:21.219 } 00:14:21.219 ] 00:14:21.219 } 00:14:21.219 ] 00:14:21.219 } 00:14:21.219 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:21.219 fio-3.35 00:14:21.219 Starting 1 thread 00:14:26.513 00:14:26.513 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84295: Mon Dec 16 21:20:15 2024 00:14:26.513 write: IOPS=36.9k, BW=144MiB/s (151MB/s)(721MiB/5002msec); 0 zone resets 00:14:26.513 slat (nsec): min=2923, max=74641, avg=3945.20, stdev=2148.10 00:14:26.513 clat (usec): min=158, max=6232, avg=1588.57, stdev=357.49 00:14:26.513 lat (usec): min=172, max=6235, avg=1592.51, stdev=357.84 00:14:26.513 clat percentiles (usec): 00:14:26.513 | 1.00th=[ 775], 5.00th=[ 1029], 10.00th=[ 1172], 20.00th=[ 1336], 00:14:26.513 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1647], 00:14:26.513 | 70.00th=[ 1729], 80.00th=[ 1827], 90.00th=[ 1991], 95.00th=[ 2147], 00:14:26.513 | 99.00th=[ 2606], 99.50th=[ 2900], 99.90th=[ 3752], 99.95th=[ 4293], 00:14:26.513 | 99.99th=[ 4817] 00:14:26.513 bw ( KiB/s): min=138528, max=165472, per=100.00%, avg=148862.22, stdev=9102.37, samples=9 00:14:26.513 iops : min=34632, max=41368, avg=37215.56, stdev=2275.59, samples=9 00:14:26.513 lat (usec) : 250=0.01%, 500=0.13%, 750=0.64%, 1000=3.55% 00:14:26.513 lat (msec) : 2=85.98%, 4=9.64%, 10=0.06% 00:14:26.513 cpu : usr=36.87%, sys=61.81%, ctx=9, majf=0, minf=1064 00:14:26.513 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=10.1%, 16=20.9%, 32=58.4%, >=64=2.0% 00:14:26.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.513 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.5%, 64=1.3%, >=64=0.0% 00:14:26.513 issued rwts: total=0,184645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:26.513 00:14:26.513 Run status group 0 (all jobs): 00:14:26.513 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=721MiB (756MB), run=5002-5002msec 00:14:26.774 ----------------------------------------------------- 00:14:26.774 Suppressions used: 00:14:26.774 count bytes template 00:14:26.774 1 11 /usr/src/fio/parse.c 00:14:26.774 1 8 libtcmalloc_minimal.so 00:14:26.774 1 904 libcrypto.so 00:14:26.774 ----------------------------------------------------- 00:14:26.774 00:14:26.774 ************************************ 00:14:26.774 END TEST xnvme_fio_plugin 00:14:26.774 ************************************ 00:14:26.774 00:14:26.774 real 0m12.211s 00:14:26.774 user 0m4.999s 00:14:26.774 sys 0m6.766s 00:14:26.774 21:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.774 21:20:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:26.774 21:20:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:26.774 21:20:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:26.774 21:20:16 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:26.774 21:20:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:26.774 21:20:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.774 21:20:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.775 21:20:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.775 ************************************ 00:14:26.775 START TEST xnvme_rpc 00:14:26.775 ************************************ 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:26.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84369 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84369 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84369 ']' 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.775 21:20:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.775 [2024-12-16 21:20:16.410686] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:26.775 [2024-12-16 21:20:16.410847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84369 ] 00:14:27.036 [2024-12-16 21:20:16.558544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.036 [2024-12-16 21:20:16.599604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.606 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.606 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:27.606 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:27.606 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.606 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.606 xnvme_bdev 00:14:27.606 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.606 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:27.606 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.607 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:27.607 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.607 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.607 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84369 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84369 ']' 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84369 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84369 00:14:27.868 killing process with pid 84369 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84369' 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84369 00:14:27.868 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84369 00:14:28.441 00:14:28.441 real 0m1.612s 00:14:28.441 user 0m1.579s 00:14:28.441 sys 0m0.514s 00:14:28.441 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.441 ************************************ 00:14:28.441 END TEST xnvme_rpc 00:14:28.441 ************************************ 00:14:28.441 21:20:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.441 21:20:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:28.441 21:20:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:28.441 21:20:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.441 21:20:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.441 ************************************ 00:14:28.441 START TEST xnvme_bdevperf 00:14:28.441 ************************************ 00:14:28.441 21:20:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:28.441 21:20:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:28.441 21:20:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:28.441 21:20:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:28.441 21:20:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:28.441 21:20:17 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:28.441 21:20:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:28.441 21:20:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:28.441 { 00:14:28.441 "subsystems": [ 00:14:28.441 { 00:14:28.441 "subsystem": "bdev", 00:14:28.441 "config": [ 00:14:28.441 { 00:14:28.441 "params": { 00:14:28.441 "io_mechanism": "io_uring_cmd", 00:14:28.441 "conserve_cpu": true, 00:14:28.441 "filename": "/dev/ng0n1", 00:14:28.441 "name": "xnvme_bdev" 00:14:28.441 }, 00:14:28.441 "method": "bdev_xnvme_create" 00:14:28.441 }, 00:14:28.441 { 00:14:28.441 "method": "bdev_wait_for_examine" 00:14:28.441 } 00:14:28.441 ] 00:14:28.441 } 00:14:28.441 ] 00:14:28.441 } 00:14:28.441 [2024-12-16 21:20:18.070712] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:28.441 [2024-12-16 21:20:18.071021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84432 ] 00:14:28.702 [2024-12-16 21:20:18.216805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.702 [2024-12-16 21:20:18.256074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.963 Running I/O for 5 seconds... 00:14:30.948 34048.00 IOPS, 133.00 MiB/s [2024-12-16T21:20:21.593Z] 34704.00 IOPS, 135.56 MiB/s [2024-12-16T21:20:22.537Z] 34549.33 IOPS, 134.96 MiB/s [2024-12-16T21:20:23.481Z] 35224.00 IOPS, 137.59 MiB/s [2024-12-16T21:20:23.481Z] 35116.80 IOPS, 137.18 MiB/s 00:14:33.781 Latency(us) 00:14:33.781 [2024-12-16T21:20:23.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.781 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:33.781 xnvme_bdev : 5.00 35116.69 137.17 0.00 0.00 1818.65 983.04 4940.41 00:14:33.781 [2024-12-16T21:20:23.481Z] =================================================================================================================== 00:14:33.781 [2024-12-16T21:20:23.481Z] Total : 35116.69 137.17 0.00 0.00 1818.65 983.04 4940.41 00:14:34.042 21:20:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.042 21:20:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:34.042 21:20:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:34.042 21:20:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:34.042 21:20:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:34.042 { 00:14:34.042 "subsystems": [ 00:14:34.042 { 00:14:34.042 "subsystem": "bdev", 00:14:34.042 "config": [ 00:14:34.042 { 00:14:34.042 "params": { 00:14:34.042 "io_mechanism": "io_uring_cmd", 00:14:34.042 "conserve_cpu": true, 00:14:34.042 "filename": "/dev/ng0n1", 00:14:34.042 "name": "xnvme_bdev" 00:14:34.042 }, 00:14:34.042 "method": "bdev_xnvme_create" 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "method": "bdev_wait_for_examine" 00:14:34.042 } 00:14:34.042 ] 00:14:34.042 } 00:14:34.042 ] 00:14:34.042 } 00:14:34.042 [2024-12-16 21:20:23.652593] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:34.042 [2024-12-16 21:20:23.652739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84495 ] 00:14:34.304 [2024-12-16 21:20:23.800278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.304 [2024-12-16 21:20:23.828961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.304 Running I/O for 5 seconds... 00:14:36.633 34869.00 IOPS, 136.21 MiB/s [2024-12-16T21:20:27.277Z] 38064.50 IOPS, 148.69 MiB/s [2024-12-16T21:20:28.220Z] 38051.00 IOPS, 148.64 MiB/s [2024-12-16T21:20:29.164Z] 38646.25 IOPS, 150.96 MiB/s 00:14:39.464 Latency(us) 00:14:39.464 [2024-12-16T21:20:29.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.464 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:39.464 xnvme_bdev : 5.00 39065.90 152.60 0.00 0.00 1634.50 204.80 10889.06 00:14:39.464 [2024-12-16T21:20:29.164Z] =================================================================================================================== 00:14:39.464 [2024-12-16T21:20:29.164Z] Total : 39065.90 152.60 0.00 0.00 1634.50 204.80 10889.06 00:14:39.725 21:20:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.725 21:20:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:39.725 21:20:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:39.725 21:20:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:39.725 21:20:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:39.725 { 00:14:39.725 "subsystems": [ 00:14:39.725 { 00:14:39.725 "subsystem": "bdev", 00:14:39.725 "config": [ 00:14:39.725 { 00:14:39.725 "params": { 00:14:39.725 "io_mechanism": "io_uring_cmd", 00:14:39.725 "conserve_cpu": true, 00:14:39.725 "filename": "/dev/ng0n1", 00:14:39.725 "name": "xnvme_bdev" 00:14:39.725 }, 00:14:39.725 "method": "bdev_xnvme_create" 00:14:39.725 }, 00:14:39.725 { 00:14:39.725 "method": "bdev_wait_for_examine" 00:14:39.725 } 00:14:39.725 ] 00:14:39.725 } 00:14:39.725 ] 00:14:39.725 } 00:14:39.725 [2024-12-16 21:20:29.277766] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:39.725 [2024-12-16 21:20:29.277916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84564 ] 00:14:39.986 [2024-12-16 21:20:29.427427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.986 [2024-12-16 21:20:29.468279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.986 Running I/O for 5 seconds... 
00:14:42.320 74752.00 IOPS, 292.00 MiB/s [2024-12-16T21:20:32.965Z] 76512.00 IOPS, 298.88 MiB/s [2024-12-16T21:20:33.906Z] 77205.33 IOPS, 301.58 MiB/s [2024-12-16T21:20:34.843Z] 76960.00 IOPS, 300.62 MiB/s 00:14:45.143 Latency(us) 00:14:45.143 [2024-12-16T21:20:34.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.143 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:45.143 xnvme_bdev : 5.00 80109.90 312.93 0.00 0.00 795.41 403.30 3881.75 00:14:45.143 [2024-12-16T21:20:34.843Z] =================================================================================================================== 00:14:45.143 [2024-12-16T21:20:34.843Z] Total : 80109.90 312.93 0.00 0.00 795.41 403.30 3881.75 00:14:45.143 21:20:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:45.143 21:20:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:45.143 21:20:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:45.143 21:20:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:45.143 21:20:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:45.143 { 00:14:45.143 "subsystems": [ 00:14:45.143 { 00:14:45.143 "subsystem": "bdev", 00:14:45.143 "config": [ 00:14:45.143 { 00:14:45.143 "params": { 00:14:45.143 "io_mechanism": "io_uring_cmd", 00:14:45.143 "conserve_cpu": true, 00:14:45.143 "filename": "/dev/ng0n1", 00:14:45.143 "name": "xnvme_bdev" 00:14:45.143 }, 00:14:45.143 "method": "bdev_xnvme_create" 00:14:45.143 }, 00:14:45.143 { 00:14:45.143 "method": "bdev_wait_for_examine" 00:14:45.143 } 00:14:45.143 ] 00:14:45.143 } 00:14:45.143 ] 00:14:45.143 } 00:14:45.404 [2024-12-16 21:20:34.845521] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:45.404 [2024-12-16 21:20:34.846579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84631 ] 00:14:45.404 [2024-12-16 21:20:35.003320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.404 [2024-12-16 21:20:35.032139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.662 Running I/O for 5 seconds... 
00:14:47.537 44156.00 IOPS, 172.48 MiB/s [2024-12-16T21:20:38.175Z] 42582.00 IOPS, 166.34 MiB/s [2024-12-16T21:20:39.556Z] 43710.33 IOPS, 170.74 MiB/s [2024-12-16T21:20:40.496Z] 41390.75 IOPS, 161.68 MiB/s [2024-12-16T21:20:40.496Z] 39494.60 IOPS, 154.28 MiB/s 00:14:50.796 Latency(us) 00:14:50.796 [2024-12-16T21:20:40.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.796 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:50.796 xnvme_bdev : 5.00 39483.51 154.23 0.00 0.00 1615.71 85.07 22988.01 00:14:50.796 [2024-12-16T21:20:40.496Z] =================================================================================================================== 00:14:50.796 [2024-12-16T21:20:40.496Z] Total : 39483.51 154.23 0.00 0.00 1615.71 85.07 22988.01 00:14:50.796 00:14:50.796 real 0m22.314s 00:14:50.796 user 0m14.833s 00:14:50.796 sys 0m5.386s 00:14:50.796 21:20:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.796 ************************************ 00:14:50.796 END TEST xnvme_bdevperf 00:14:50.796 ************************************ 00:14:50.796 21:20:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:50.796 21:20:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:50.796 21:20:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:50.796 21:20:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.796 21:20:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.796 ************************************ 00:14:50.796 START TEST xnvme_fio_plugin 00:14:50.796 ************************************ 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:50.796 21:20:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:50.796 { 00:14:50.796 "subsystems": [ 00:14:50.796 { 00:14:50.796 "subsystem": "bdev", 00:14:50.796 "config": [ 00:14:50.796 { 00:14:50.796 "params": { 00:14:50.796 "io_mechanism": "io_uring_cmd", 00:14:50.796 "conserve_cpu": true, 00:14:50.796 "filename": "/dev/ng0n1", 00:14:50.797 "name": "xnvme_bdev" 00:14:50.797 }, 00:14:50.797 "method": "bdev_xnvme_create" 00:14:50.797 }, 00:14:50.797 { 00:14:50.797 "method": "bdev_wait_for_examine" 00:14:50.797 } 00:14:50.797 ] 00:14:50.797 } 00:14:50.797 ] 00:14:50.797 } 00:14:51.058 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:51.058 fio-3.35 00:14:51.058 Starting 1 thread 00:14:56.456 00:14:56.456 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84738: Mon Dec 16 21:20:45 2024 00:14:56.456 read: IOPS=37.4k, BW=146MiB/s (153MB/s)(730MiB/5001msec) 00:14:56.456 slat (usec): min=2, max=155, avg= 3.84, stdev= 1.91 00:14:56.456 clat (usec): min=970, max=2972, avg=1556.96, stdev=223.49 00:14:56.456 lat (usec): min=973, max=2999, avg=1560.80, stdev=223.77 00:14:56.456 clat percentiles (usec): 00:14:56.456 | 1.00th=[ 1172], 5.00th=[ 1254], 10.00th=[ 1303], 20.00th=[ 1369], 00:14:56.456 | 30.00th=[ 1418], 40.00th=[ 1467], 50.00th=[ 1516], 60.00th=[ 1582], 00:14:56.456 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1860], 95.00th=[ 1975], 00:14:56.456 | 99.00th=[ 2212], 99.50th=[ 2311], 99.90th=[ 2606], 99.95th=[ 2769], 00:14:56.456 | 99.99th=[ 2900] 00:14:56.456 bw ( KiB/s): min=144896, max=156160, per=100.00%, avg=150926.22, stdev=3479.88, samples=9 00:14:56.456 iops : min=36224, max=39040, avg=37731.56, stdev=869.97, samples=9 00:14:56.456 lat (usec) : 1000=0.01% 00:14:56.456 lat (msec) : 2=95.63%, 4=4.37% 00:14:56.456 cpu : usr=46.94%, sys=49.60%, ctx=16, majf=0, minf=1063 00:14:56.456 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:56.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.456 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:14:56.456 issued rwts: total=186880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.456 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:56.456 00:14:56.456 Run status group 0 (all jobs): 00:14:56.456 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=730MiB (765MB), run=5001-5001msec 00:14:56.718 ----------------------------------------------------- 00:14:56.718 Suppressions used: 00:14:56.718 count bytes template 00:14:56.718 1 11 /usr/src/fio/parse.c 00:14:56.718 1 8 libtcmalloc_minimal.so 00:14:56.718 1 904 libcrypto.so 00:14:56.718 ----------------------------------------------------- 00:14:56.718 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:56.718 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:56.979 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:56.979 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:56.979 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:56.979 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:56.979 21:20:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.979 { 00:14:56.979 "subsystems": [ 00:14:56.979 { 00:14:56.979 "subsystem": "bdev", 00:14:56.980 "config": [ 00:14:56.980 { 00:14:56.980 "params": { 00:14:56.980 "io_mechanism": "io_uring_cmd", 00:14:56.980 "conserve_cpu": true, 00:14:56.980 "filename": "/dev/ng0n1", 00:14:56.980 "name": "xnvme_bdev" 00:14:56.980 }, 00:14:56.980 "method": "bdev_xnvme_create" 00:14:56.980 }, 00:14:56.980 { 00:14:56.980 "method": "bdev_wait_for_examine" 00:14:56.980 } 00:14:56.980 ] 00:14:56.980 } 00:14:56.980 ] 00:14:56.980 } 00:14:56.980 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:56.980 fio-3.35 00:14:56.980 Starting 1 thread 00:15:03.564 00:15:03.565 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84818: Mon Dec 16 21:20:51 2024 00:15:03.565 write: IOPS=37.7k, BW=147MiB/s (155MB/s)(737MiB/5002msec); 0 zone resets 00:15:03.565 slat (usec): min=2, max=314, avg= 4.20, stdev= 3.44 00:15:03.565 clat (usec): min=203, max=6025, avg=1535.58, stdev=339.77 00:15:03.565 lat (usec): min=207, max=6029, avg=1539.78, stdev=340.11 00:15:03.565 clat percentiles (usec): 00:15:03.565 | 1.00th=[ 881], 5.00th=[ 1057], 10.00th=[ 1156], 20.00th=[ 1270], 00:15:03.565 | 30.00th=[ 1352], 40.00th=[ 1434], 50.00th=[ 1516], 60.00th=[ 1582], 00:15:03.565 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2073], 00:15:03.565 | 99.00th=[ 2474], 99.50th=[ 2933], 99.90th=[ 3785], 99.95th=[ 4359], 00:15:03.565 | 99.99th=[ 5080] 00:15:03.565 bw ( KiB/s): min=145480, max=155888, per=100.00%, avg=151453.33, stdev=3684.25, samples=9 00:15:03.565 iops : min=36370, max=38972, avg=37863.33, stdev=921.06, samples=9 00:15:03.565 lat (usec) : 250=0.01%, 500=0.03%, 750=0.18%, 1000=2.91% 00:15:03.565 lat (msec) : 2=89.87%, 4=6.94%, 10=0.08% 00:15:03.565 cpu : usr=47.49%, sys=44.73%, ctx=12, majf=0, minf=1064 00:15:03.565 IO depths : 1=1.2%, 2=2.6%, 4=5.5%, 8=11.7%, 16=24.8%, 32=52.4%, >=64=1.8% 00:15:03.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.565 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:15:03.565 issued rwts: total=0,188718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:03.565 00:15:03.565 Run status group 0 (all jobs): 00:15:03.565 WRITE: bw=147MiB/s (155MB/s), 147MiB/s-147MiB/s (155MB/s-155MB/s), io=737MiB (773MB), run=5002-5002msec 00:15:03.565 ----------------------------------------------------- 00:15:03.565 Suppressions used: 00:15:03.565 count bytes template 00:15:03.565 1 11 /usr/src/fio/parse.c 00:15:03.565 1 8 libtcmalloc_minimal.so 00:15:03.565 1 904 libcrypto.so 00:15:03.565 ----------------------------------------------------- 00:15:03.565 00:15:03.565 ************************************ 00:15:03.565 END TEST xnvme_fio_plugin 00:15:03.565 00:15:03.565 real 0m11.995s 00:15:03.565 user 0m5.871s 00:15:03.565 sys 0m5.252s 00:15:03.565 21:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.565 21:20:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:03.565 ************************************ 00:15:03.565 21:20:52 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 84369 00:15:03.565 21:20:52 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84369 ']' 00:15:03.565 21:20:52 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 84369 00:15:03.565 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84369) - No such process 00:15:03.565 Process with pid 84369 is not found 00:15:03.565 21:20:52 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 84369 is not found' 00:15:03.565 00:15:03.565 real 2m58.902s 00:15:03.565 user 1m27.967s 00:15:03.565 sys 1m17.074s 00:15:03.565 ************************************ 00:15:03.565 END TEST nvme_xnvme 00:15:03.565 ************************************ 00:15:03.565 21:20:52 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.565 21:20:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.565 21:20:52 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:03.565 21:20:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:03.565 21:20:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.565 21:20:52 -- common/autotest_common.sh@10 -- # set +x 00:15:03.565 ************************************ 00:15:03.565 START TEST blockdev_xnvme 00:15:03.565 ************************************ 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:03.565 * Looking for test storage... 00:15:03.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.565 21:20:52 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:03.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.565 --rc genhtml_branch_coverage=1 00:15:03.565 --rc genhtml_function_coverage=1 00:15:03.565 --rc genhtml_legend=1 00:15:03.565 --rc geninfo_all_blocks=1 00:15:03.565 --rc geninfo_unexecuted_blocks=1 00:15:03.565 00:15:03.565 ' 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:03.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.565 --rc genhtml_branch_coverage=1 00:15:03.565 --rc genhtml_function_coverage=1 00:15:03.565 --rc genhtml_legend=1 00:15:03.565 --rc geninfo_all_blocks=1 00:15:03.565 --rc geninfo_unexecuted_blocks=1 00:15:03.565 00:15:03.565 ' 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:03.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.565 --rc genhtml_branch_coverage=1 00:15:03.565 --rc genhtml_function_coverage=1 00:15:03.565 --rc genhtml_legend=1 00:15:03.565 --rc geninfo_all_blocks=1 00:15:03.565 --rc geninfo_unexecuted_blocks=1 00:15:03.565 00:15:03.565 ' 00:15:03.565 21:20:52 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:03.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.565 --rc genhtml_branch_coverage=1 00:15:03.565 --rc genhtml_function_coverage=1 00:15:03.565 --rc genhtml_legend=1 00:15:03.565 --rc geninfo_all_blocks=1 00:15:03.565 --rc geninfo_unexecuted_blocks=1 00:15:03.565 00:15:03.565 ' 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=84949 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:03.565 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 84949 00:15:03.566 21:20:52 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:03.566 21:20:52 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 84949 ']' 00:15:03.566 21:20:52 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.566 21:20:52 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.566 21:20:52 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.566 21:20:52 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.566 21:20:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.566 [2024-12-16 21:20:52.738308] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
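Once spdk_tgt is listening, setup.sh reset rebinds the controllers and the suite enumerates /dev/nvme*n*, registering each namespace as an xNVMe bdev over the RPC socket. The same flow can be reproduced by hand with scripts/rpc.py; a minimal sketch, assuming the default /var/tmp/spdk.sock socket and the device names probed in this run:

  # block until the target accepts RPCs (up to 30 s), similar to waitforlisten
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods >/dev/null
  # register one namespace with the io_uring mechanism; -c enables conserve_cpu,
  # matching the 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' calls below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
  # confirm the bdev registered
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1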
00:15:03.566 [2024-12-16 21:20:52.738608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84949 ] 00:15:03.566 [2024-12-16 21:20:52.884882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.566 [2024-12-16 21:20:52.915472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.137 21:20:53 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.137 21:20:53 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:04.138 21:20:53 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:04.138 21:20:53 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:04.138 21:20:53 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:04.138 21:20:53 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:04.138 21:20:53 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:04.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:04.972 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:04.972 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:04.972 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:04.972 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:04.972 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0c0n1 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0c0n1 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:04.972 
21:20:54 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:04.972 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n2 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n3 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring -c' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:05.235 nvme0n1 00:15:05.235 nvme1n1 00:15:05.235 nvme1n2 00:15:05.235 nvme1n3 00:15:05.235 nvme2n1 00:15:05.235 nvme3n1 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.235 
21:20:54 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.235 21:20:54 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.235 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:05.236 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2f7e65e2-d726-4184-9f39-d9d474ea2c30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2f7e65e2-d726-4184-9f39-d9d474ea2c30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9ca6f993-f8d6-4923-8be9-2eb09d2c8c7c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9ca6f993-f8d6-4923-8be9-2eb09d2c8c7c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "ed5d4c9d-cc63-4cca-b69d-3bfbc0a2dce0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed5d4c9d-cc63-4cca-b69d-3bfbc0a2dce0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' 
"e97393db-0e8e-48ba-8dca-f7fc95ff5428"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e97393db-0e8e-48ba-8dca-f7fc95ff5428",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "51760ef4-2ac4-418a-b9a6-0982a9498f89"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "51760ef4-2ac4-418a-b9a6-0982a9498f89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8b3f64e9-1896-4eeb-b62b-56e2b36a71c7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8b3f64e9-1896-4eeb-b62b-56e2b36a71c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:05.236 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:05.236 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:05.236 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:05.236 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:05.236 21:20:54 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 84949 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84949 ']' 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 84949 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 84949 00:15:05.236 killing process with pid 84949 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84949' 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 84949 00:15:05.236 21:20:54 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 84949 00:15:05.808 21:20:55 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:05.808 21:20:55 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:05.808 21:20:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:05.808 21:20:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.808 21:20:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.808 ************************************ 00:15:05.808 START TEST bdev_hello_world 00:15:05.808 ************************************ 00:15:05.808 21:20:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:05.808 [2024-12-16 21:20:55.481556] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:05.808 [2024-12-16 21:20:55.481892] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85221 ] 00:15:06.069 [2024-12-16 21:20:55.627311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.069 [2024-12-16 21:20:55.666792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.330 [2024-12-16 21:20:55.932076] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:06.330 [2024-12-16 21:20:55.932147] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:06.330 [2024-12-16 21:20:55.932174] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:06.330 [2024-12-16 21:20:55.934710] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:06.330 [2024-12-16 21:20:55.935774] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:06.330 [2024-12-16 21:20:55.935826] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:06.330 [2024-12-16 21:20:55.936481] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
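bdevperf, the fio plugin and hello_bdev all consume the same style of JSON subsystem block, and in these tests gen_conf streams it over a pipe rather than a file, hence the recurring --json /dev/fd/62. A sketch of an equivalent manual invocation using process substitution, reusing the /dev/ng0n1 io_uring_cmd configuration from the bdevperf runs above:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -q 64 -w randread -t 5 -o 4096 -T xnvme_bdev \
  --json <(cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
   "conserve_cpu":true,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
)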
00:15:06.330 00:15:06.330 [2024-12-16 21:20:55.936521] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:06.592 ************************************ 00:15:06.592 END TEST bdev_hello_world 00:15:06.592 ************************************ 00:15:06.592 00:15:06.592 real 0m0.774s 00:15:06.592 user 0m0.394s 00:15:06.592 sys 0m0.234s 00:15:06.592 21:20:56 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.592 21:20:56 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:06.592 21:20:56 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:06.592 21:20:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:06.592 21:20:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.592 21:20:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.592 ************************************ 00:15:06.592 START TEST bdev_bounds 00:15:06.592 ************************************ 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85252 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85252' 00:15:06.592 Process bdevio pid: 85252 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85252 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 85252 ']' 00:15:06.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:06.592 21:20:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:06.853 [2024-12-16 21:20:56.339042] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
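Note that unlike the single-core runs above (-c 0x1), bdev_bounds starts bdevio with core mask 0x7, visible in the EAL parameters below: 0x7 = 0b111, one bit per core, so the app reports three available cores and a reactor starts on each of cores 0, 1 and 2. A sparse mask such as 0x5 (0b101) would instead select cores 0 and 2.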
00:15:06.853 [2024-12-16 21:20:56.339523] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85252 ] 00:15:06.853 [2024-12-16 21:20:56.489036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.853 [2024-12-16 21:20:56.534368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.853 [2024-12-16 21:20:56.534745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.853 [2024-12-16 21:20:56.534829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.798 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.798 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:07.798 21:20:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:07.798 I/O targets: 00:15:07.798 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:07.798 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:07.798 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:07.798 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:07.798 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:07.798 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:07.798 00:15:07.798 00:15:07.798 CUnit - A unit testing framework for C - Version 2.1-3 00:15:07.798 http://cunit.sourceforge.net/ 00:15:07.798 00:15:07.798 00:15:07.798 Suite: bdevio tests on: nvme3n1 00:15:07.798 Test: blockdev write read block ...passed 00:15:07.798 Test: blockdev write zeroes read block ...passed 00:15:07.798 Test: blockdev write zeroes read no split ...passed 00:15:07.798 Test: blockdev write zeroes read split ...passed 00:15:07.798 Test: blockdev write zeroes read split partial ...passed 00:15:07.798 Test: blockdev reset ...passed 00:15:07.798 Test: blockdev write read 8 blocks ...passed 00:15:07.798 Test: blockdev write read size > 128k ...passed 00:15:07.798 Test: blockdev write read invalid size ...passed 00:15:07.798 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.798 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.798 Test: blockdev write read max offset ...passed 00:15:07.798 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.798 Test: blockdev writev readv 8 blocks ...passed 00:15:07.798 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.798 Test: blockdev writev readv block ...passed 00:15:07.798 Test: blockdev writev readv size > 128k ...passed 00:15:07.798 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.799 Test: blockdev comparev and writev ...passed 00:15:07.799 Test: blockdev nvme passthru rw ...passed 00:15:07.799 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.799 Test: blockdev nvme admin passthru ...passed 00:15:07.799 Test: blockdev copy ...passed 00:15:07.799 Suite: bdevio tests on: nvme2n1 00:15:07.799 Test: blockdev write read block ...passed 00:15:07.799 Test: blockdev write zeroes read block ...passed 00:15:07.799 Test: blockdev write zeroes read no split ...passed 00:15:07.799 Test: blockdev write zeroes read split ...passed 00:15:07.799 Test: blockdev write zeroes read split partial ...passed 00:15:07.799 Test: blockdev reset ...passed 
00:15:07.799 Test: blockdev write read 8 blocks ...passed 00:15:07.799 Test: blockdev write read size > 128k ...passed 00:15:07.799 Test: blockdev write read invalid size ...passed 00:15:07.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.799 Test: blockdev write read max offset ...passed 00:15:07.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.799 Test: blockdev writev readv 8 blocks ...passed 00:15:07.799 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.799 Test: blockdev writev readv block ...passed 00:15:07.799 Test: blockdev writev readv size > 128k ...passed 00:15:07.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.799 Test: blockdev comparev and writev ...passed 00:15:07.799 Test: blockdev nvme passthru rw ...passed 00:15:07.799 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.799 Test: blockdev nvme admin passthru ...passed 00:15:07.799 Test: blockdev copy ...passed 00:15:07.799 Suite: bdevio tests on: nvme1n3 00:15:07.799 Test: blockdev write read block ...passed 00:15:07.799 Test: blockdev write zeroes read block ...passed 00:15:07.799 Test: blockdev write zeroes read no split ...passed 00:15:07.799 Test: blockdev write zeroes read split ...passed 00:15:07.799 Test: blockdev write zeroes read split partial ...passed 00:15:07.799 Test: blockdev reset ...passed 00:15:07.799 Test: blockdev write read 8 blocks ...passed 00:15:07.799 Test: blockdev write read size > 128k ...passed 00:15:07.799 Test: blockdev write read invalid size ...passed 00:15:07.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.799 Test: blockdev write read max offset ...passed 00:15:07.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.799 Test: blockdev writev readv 8 blocks ...passed 00:15:07.799 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.799 Test: blockdev writev readv block ...passed 00:15:07.799 Test: blockdev writev readv size > 128k ...passed 00:15:07.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.799 Test: blockdev comparev and writev ...passed 00:15:07.799 Test: blockdev nvme passthru rw ...passed 00:15:07.799 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.799 Test: blockdev nvme admin passthru ...passed 00:15:07.799 Test: blockdev copy ...passed 00:15:07.799 Suite: bdevio tests on: nvme1n2 00:15:07.799 Test: blockdev write read block ...passed 00:15:07.799 Test: blockdev write zeroes read block ...passed 00:15:07.799 Test: blockdev write zeroes read no split ...passed 00:15:07.799 Test: blockdev write zeroes read split ...passed 00:15:07.799 Test: blockdev write zeroes read split partial ...passed 00:15:07.799 Test: blockdev reset ...passed 00:15:07.799 Test: blockdev write read 8 blocks ...passed 00:15:07.799 Test: blockdev write read size > 128k ...passed 00:15:07.799 Test: blockdev write read invalid size ...passed 00:15:07.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.799 Test: blockdev write read max offset ...passed 00:15:07.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.799 Test: blockdev writev readv 8 blocks 
...passed 00:15:07.799 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.799 Test: blockdev writev readv block ...passed 00:15:07.799 Test: blockdev writev readv size > 128k ...passed 00:15:07.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.799 Test: blockdev comparev and writev ...passed 00:15:07.799 Test: blockdev nvme passthru rw ...passed 00:15:07.799 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.799 Test: blockdev nvme admin passthru ...passed 00:15:07.799 Test: blockdev copy ...passed 00:15:07.799 Suite: bdevio tests on: nvme1n1 00:15:07.799 Test: blockdev write read block ...passed 00:15:07.799 Test: blockdev write zeroes read block ...passed 00:15:07.799 Test: blockdev write zeroes read no split ...passed 00:15:07.799 Test: blockdev write zeroes read split ...passed 00:15:07.799 Test: blockdev write zeroes read split partial ...passed 00:15:07.799 Test: blockdev reset ...passed 00:15:07.799 Test: blockdev write read 8 blocks ...passed 00:15:07.799 Test: blockdev write read size > 128k ...passed 00:15:07.799 Test: blockdev write read invalid size ...passed 00:15:07.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.799 Test: blockdev write read max offset ...passed 00:15:08.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:08.061 Test: blockdev writev readv 8 blocks ...passed 00:15:08.061 Test: blockdev writev readv 30 x 1block ...passed 00:15:08.061 Test: blockdev writev readv block ...passed 00:15:08.061 Test: blockdev writev readv size > 128k ...passed 00:15:08.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:08.061 Test: blockdev comparev and writev ...passed 00:15:08.061 Test: blockdev nvme passthru rw ...passed 00:15:08.061 Test: blockdev nvme passthru vendor specific ...passed 00:15:08.061 Test: blockdev nvme admin passthru ...passed 00:15:08.061 Test: blockdev copy ...passed 00:15:08.061 Suite: bdevio tests on: nvme0n1 00:15:08.061 Test: blockdev write read block ...passed 00:15:08.061 Test: blockdev write zeroes read block ...passed 00:15:08.061 Test: blockdev write zeroes read no split ...passed 00:15:08.061 Test: blockdev write zeroes read split ...passed 00:15:08.061 Test: blockdev write zeroes read split partial ...passed 00:15:08.061 Test: blockdev reset ...passed 00:15:08.061 Test: blockdev write read 8 blocks ...passed 00:15:08.061 Test: blockdev write read size > 128k ...passed 00:15:08.061 Test: blockdev write read invalid size ...passed 00:15:08.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:08.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:08.061 Test: blockdev write read max offset ...passed 00:15:08.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:08.061 Test: blockdev writev readv 8 blocks ...passed 00:15:08.062 Test: blockdev writev readv 30 x 1block ...passed 00:15:08.062 Test: blockdev writev readv block ...passed 00:15:08.062 Test: blockdev writev readv size > 128k ...passed 00:15:08.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:08.062 Test: blockdev comparev and writev ...passed 00:15:08.062 Test: blockdev nvme passthru rw ...passed 00:15:08.062 Test: blockdev nvme passthru vendor specific ...passed 00:15:08.062 Test: blockdev nvme admin passthru ...passed 00:15:08.062 Test: blockdev copy ...passed 
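The killprocess helper traced above for pids 84369 and 84949, and again below for 85252, follows a fixed shape. A rough reconstruction from the xtrace, not the actual autotest_common.sh source:

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                  # the '[' -z 84949 ']' guard in the trace
  if ! kill -0 "$pid" 2>/dev/null; then      # pid already gone, as with 84369 earlier
    echo "Process with pid $pid is not found"
    return 0
  fi
  local process_name=unknown
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  if [ "$process_name" = sudo ]; then
    :                                        # the real helper targets sudo's child here
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                # reaps the process when it is a shell child
}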
00:15:08.062 00:15:08.062 Run Summary: Type Total Ran Passed Failed Inactive 00:15:08.062 suites 6 6 n/a 0 0 00:15:08.062 tests 138 138 138 0 0 00:15:08.062 asserts 780 780 780 0 n/a 00:15:08.062 00:15:08.062 Elapsed time = 0.605 seconds 00:15:08.062 0 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 85252 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 85252 ']' 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 85252 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85252 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85252' 00:15:08.062 killing process with pid 85252 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 85252 00:15:08.062 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 85252 00:15:08.323 21:20:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:08.323 00:15:08.323 real 0m1.608s 00:15:08.323 user 0m3.806s 00:15:08.323 sys 0m0.396s 00:15:08.323 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.323 ************************************ 00:15:08.323 21:20:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:08.323 END TEST bdev_bounds 00:15:08.323 ************************************ 00:15:08.323 21:20:57 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:15:08.323 21:20:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:08.323 21:20:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.323 21:20:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.323 ************************************ 00:15:08.323 START TEST bdev_nbd 00:15:08.323 ************************************ 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
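For orientation before the per-device traces that follow: nbd_function_test drives a bdev_svc app that owns the six bdevs over the /var/tmp/spdk-nbd.sock RPC socket, exporting each bdev as a kernel NBD device, proving the device answers I/O, and tearing the export down again. A condensed sketch of that flow for a single device, using only the rpc.py calls that appear verbatim in the trace (the readiness loop is a simplification of the waitfornbd helper):

    # Hedged condensation of one nbd_function_test iteration.
    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Export one bdev as a kernel NBD device.
    "$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0

    # waitfornbd-style check: wait for the kernel to publish the device,
    # then prove it serves a direct 4 KiB read (the dd lines below).
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
       bs=4096 count=1 iflag=direct

    # Enumerate the exports, then tear this one down.
    "$rpc" -s "$sock" nbd_get_disks
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0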
00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85303 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85303 /var/tmp/spdk-nbd.sock 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 85303 ']' 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:08.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.323 21:20:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:08.323 [2024-12-16 21:20:58.022579] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:08.323 [2024-12-16 21:20:58.022910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.584 [2024-12-16 21:20:58.170689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.584 [2024-12-16 21:20:58.210171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:09.528 21:20:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.528 
1+0 records in 00:15:09.528 1+0 records out 00:15:09.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108582 s, 3.8 MB/s 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:09.528 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.789 1+0 records in 00:15:09.789 1+0 records out 00:15:09.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881524 s, 4.6 MB/s 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:09.789 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:10.051 21:20:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.051 1+0 records in 00:15:10.051 1+0 records out 00:15:10.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137938 s, 3.0 MB/s 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.051 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.312 1+0 records in 00:15:10.312 1+0 records out 00:15:10.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00142788 s, 2.9 MB/s 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.312 21:20:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.573 1+0 records in 00:15:10.573 1+0 records out 00:15:10.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00154335 s, 2.7 MB/s 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.573 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:10.835 21:21:00 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.835 1+0 records in 00:15:10.835 1+0 records out 00:15:10.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101084 s, 4.1 MB/s 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.835 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd0", 00:15:11.096 "bdev_name": "nvme0n1" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd1", 00:15:11.096 "bdev_name": "nvme1n1" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd2", 00:15:11.096 "bdev_name": "nvme1n2" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd3", 00:15:11.096 "bdev_name": "nvme1n3" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd4", 00:15:11.096 "bdev_name": "nvme2n1" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd5", 00:15:11.096 "bdev_name": "nvme3n1" 00:15:11.096 } 00:15:11.096 ]' 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd0", 00:15:11.096 "bdev_name": "nvme0n1" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd1", 00:15:11.096 "bdev_name": "nvme1n1" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd2", 00:15:11.096 "bdev_name": "nvme1n2" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd3", 00:15:11.096 "bdev_name": "nvme1n3" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": "/dev/nbd4", 00:15:11.096 "bdev_name": "nvme2n1" 00:15:11.096 }, 00:15:11.096 { 00:15:11.096 "nbd_device": 
"/dev/nbd5", 00:15:11.096 "bdev_name": "nvme3n1" 00:15:11.096 } 00:15:11.096 ]' 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.096 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.358 21:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:11.619 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:11.619 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:11.619 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:11.619 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.619 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.619 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:11.619 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.620 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.620 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.620 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.882 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.143 21:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.404 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:12.665 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:12.925 /dev/nbd0 00:15:12.925 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.925 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.925 21:21:02 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:12.925 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:12.925 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.926 1+0 records in 00:15:12.926 1+0 records out 00:15:12.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122535 s, 3.3 MB/s 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:12.926 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:13.187 /dev/nbd1 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.187 1+0 records in 00:15:13.187 1+0 records out 00:15:13.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996042 s, 4.1 MB/s 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:13.187 21:21:02 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.187 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.188 21:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:15:13.448 /dev/nbd10 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.448 1+0 records in 00:15:13.448 1+0 records out 00:15:13.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000943051 s, 4.3 MB/s 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.448 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:15:13.713 /dev/nbd11 00:15:13.713 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:13.713 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:13.713 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:13.713 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:13.713 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.714 21:21:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.714 1+0 records in 00:15:13.714 1+0 records out 00:15:13.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116913 s, 3.5 MB/s 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.714 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:13.975 /dev/nbd12 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.975 1+0 records in 00:15:13.975 1+0 records out 00:15:13.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00156466 s, 2.6 MB/s 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.975 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:14.234 /dev/nbd13 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.235 1+0 records in 00:15:14.235 1+0 records out 00:15:14.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108947 s, 3.8 MB/s 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.235 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:14.495 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:14.495 { 00:15:14.495 "nbd_device": "/dev/nbd0", 00:15:14.495 "bdev_name": "nvme0n1" 00:15:14.495 }, 00:15:14.495 { 00:15:14.495 "nbd_device": "/dev/nbd1", 00:15:14.495 "bdev_name": "nvme1n1" 00:15:14.495 }, 00:15:14.495 { 00:15:14.495 "nbd_device": "/dev/nbd10", 00:15:14.495 "bdev_name": "nvme1n2" 00:15:14.495 }, 00:15:14.495 { 00:15:14.495 "nbd_device": "/dev/nbd11", 00:15:14.495 "bdev_name": "nvme1n3" 00:15:14.495 }, 00:15:14.495 { 00:15:14.496 "nbd_device": "/dev/nbd12", 00:15:14.496 "bdev_name": "nvme2n1" 00:15:14.496 }, 00:15:14.496 { 00:15:14.496 "nbd_device": "/dev/nbd13", 00:15:14.496 "bdev_name": "nvme3n1" 00:15:14.496 } 00:15:14.496 ]' 00:15:14.496 21:21:03 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:14.496 { 00:15:14.496 "nbd_device": "/dev/nbd0", 00:15:14.496 "bdev_name": "nvme0n1" 00:15:14.496 }, 00:15:14.496 { 00:15:14.496 "nbd_device": "/dev/nbd1", 00:15:14.496 "bdev_name": "nvme1n1" 00:15:14.496 }, 00:15:14.496 { 00:15:14.496 "nbd_device": "/dev/nbd10", 00:15:14.496 "bdev_name": "nvme1n2" 00:15:14.496 }, 00:15:14.496 { 00:15:14.496 "nbd_device": "/dev/nbd11", 00:15:14.496 "bdev_name": "nvme1n3" 00:15:14.496 }, 00:15:14.496 { 00:15:14.496 "nbd_device": "/dev/nbd12", 00:15:14.496 "bdev_name": "nvme2n1" 00:15:14.496 }, 00:15:14.496 { 00:15:14.496 "nbd_device": "/dev/nbd13", 00:15:14.496 "bdev_name": "nvme3n1" 00:15:14.496 } 00:15:14.496 ]' 00:15:14.496 21:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:14.496 /dev/nbd1 00:15:14.496 /dev/nbd10 00:15:14.496 /dev/nbd11 00:15:14.496 /dev/nbd12 00:15:14.496 /dev/nbd13' 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:14.496 /dev/nbd1 00:15:14.496 /dev/nbd10 00:15:14.496 /dev/nbd11 00:15:14.496 /dev/nbd12 00:15:14.496 /dev/nbd13' 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:14.496 256+0 records in 00:15:14.496 256+0 records out 00:15:14.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684733 s, 153 MB/s 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:14.496 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:14.757 256+0 records in 00:15:14.757 256+0 records out 00:15:14.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243273 s, 4.3 MB/s 00:15:14.757 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:14.757 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:15.018 256+0 records in 00:15:15.018 256+0 records out 00:15:15.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247889 s, 
4.2 MB/s 00:15:15.018 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.018 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:15.280 256+0 records in 00:15:15.280 256+0 records out 00:15:15.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.230093 s, 4.6 MB/s 00:15:15.280 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.280 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:15.280 256+0 records in 00:15:15.280 256+0 records out 00:15:15.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190393 s, 5.5 MB/s 00:15:15.280 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.280 21:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:15.853 256+0 records in 00:15:15.853 256+0 records out 00:15:15.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.30761 s, 3.4 MB/s 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:15.853 256+0 records in 00:15:15.853 256+0 records out 00:15:15.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238322 s, 4.4 MB/s 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:15.853 
21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.853 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.115 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:16.376 21:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:16.376 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:16.376 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:16.376 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.376 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.376 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:16.376 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.376 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.376 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.376 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.637 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.898 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.159 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.159 
21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:17.420 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.420 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.420 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:17.420 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.420 21:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:17.420 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:17.681 malloc_lvol_verify 00:15:17.681 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:17.972 0f441d62-a112-4f9f-a76e-0638af3249ed 00:15:17.972 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:18.234 d3f8a697-dbc3-4105-a9af-7550fe471949 00:15:18.234 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:18.234 /dev/nbd0 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:18.495 mke2fs 1.47.0 (5-Feb-2023) 00:15:18.495 Discarding device blocks: 0/4096 
done 00:15:18.495 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:18.495 00:15:18.495 Allocating group tables: 0/1 done 00:15:18.495 Writing inode tables: 0/1 done 00:15:18.495 Creating journal (1024 blocks): done 00:15:18.495 Writing superblocks and filesystem accounting information: 0/1 done 00:15:18.495 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.495 21:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85303 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 85303 ']' 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 85303 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85303 00:15:18.495 killing process with pid 85303 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85303' 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 85303 00:15:18.495 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 85303 00:15:18.756 ************************************ 00:15:18.756 END TEST bdev_nbd 00:15:18.756 21:21:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:18.756 00:15:18.756 real 0m10.402s 00:15:18.756 user 0m14.019s 00:15:18.756 sys 0m3.839s 00:15:18.756 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.756 21:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:18.756 ************************************ 
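[editor's note] The bdev_nbd run above exercises SPDK's standard NBD data path: each exported /dev/nbdX is filled from a shared random file with dd (256 x 4 KiB, oflag=direct), byte-compared back with cmp, and then detached through the nbd_stop_disk RPC while polling /proc/partitions until the node disappears. The following is a minimal standalone sketch of that write/verify/stop cycle, using only the commands visible in the trace; the bdev name "malloc0" and the rpc.py invocation style are illustrative assumptions, not values taken from this run.

    #!/usr/bin/env bash
    # Sketch of the nbd_common.sh write/verify/stop cycle traced above.
    # Assumes a running SPDK target listening on $sock with an existing
    # bdev named "malloc0" -- both are assumptions, not from this log.
    set -euo pipefail

    sock=/var/tmp/spdk-nbd.sock
    nbd=/dev/nbd0
    tmp=$(mktemp)

    # Export the bdev as a kernel NBD device.
    rpc.py -s "$sock" nbd_start_disk malloc0 "$nbd"

    # Write 1 MiB of random data through the device; oflag=direct
    # bypasses the page cache so the I/O actually reaches the bdev.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct

    # Read back and byte-compare the first 1 MiB against the source file.
    cmp -b -n 1M "$tmp" "$nbd"

    # Detach, then poll /proc/partitions (up to 20 tries) until the
    # kernel drops the entry -- mirroring waitfornbd_exit above.
    rpc.py -s "$sock" nbd_stop_disk "$nbd"
    for i in $(seq 1 20); do
        grep -q -w "$(basename "$nbd")" /proc/partitions || break
        sleep 0.1
    done
    rm -f "$tmp"

The direct-I/O flag matters for the verify step: without it, cmp could be satisfied from cached pages and the round trip through the NBD server would never be tested.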
00:15:18.756 21:21:08 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:18.756 21:21:08 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:18.756 21:21:08 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:18.756 21:21:08 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:18.756 21:21:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:18.756 21:21:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.756 21:21:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:18.756 ************************************ 00:15:18.756 START TEST bdev_fio 00:15:18.756 ************************************ 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:18.756 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:18.756 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n2]' 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n2 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n3]' 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n3 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:19.018 ************************************ 00:15:19.018 START TEST bdev_fio_rw_verify 00:15:19.018 ************************************ 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:19.018 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:19.019 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:19.019 21:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:19.019 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.019 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.019 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.019 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.019 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.019 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.019 fio-3.35 00:15:19.019 Starting 6 threads 00:15:31.260 00:15:31.260 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85704: Mon Dec 16 21:21:19 2024 00:15:31.260 read: IOPS=14.2k, BW=55.3MiB/s (58.0MB/s)(553MiB/10001msec) 00:15:31.260 slat (usec): min=2, max=2777, avg= 7.87, stdev=22.37 00:15:31.260 clat (usec): min=104, max=9949, avg=1347.85, stdev=776.48 00:15:31.260 lat (usec): min=108, max=9963, avg=1355.72, stdev=777.27 
00:15:31.260 clat percentiles (usec): 00:15:31.260 | 50.000th=[ 1237], 99.000th=[ 3785], 99.900th=[ 5407], 99.990th=[ 7504], 00:15:31.260 | 99.999th=[ 9896] 00:15:31.260 write: IOPS=14.6k, BW=56.9MiB/s (59.7MB/s)(569MiB/10001msec); 0 zone resets 00:15:31.260 slat (usec): min=10, max=4368, avg=45.27, stdev=150.57 00:15:31.260 clat (usec): min=87, max=10867, avg=1631.64, stdev=837.10 00:15:31.260 lat (usec): min=105, max=10885, avg=1676.91, stdev=850.40 00:15:31.260 clat percentiles (usec): 00:15:31.260 | 50.000th=[ 1500], 99.000th=[ 4293], 99.900th=[ 5669], 99.990th=[ 8717], 00:15:31.260 | 99.999th=[ 9503] 00:15:31.260 bw ( KiB/s): min=48700, max=74758, per=100.00%, avg=58296.11, stdev=1327.54, samples=114 00:15:31.260 iops : min=12172, max=18688, avg=14573.05, stdev=331.87, samples=114 00:15:31.260 lat (usec) : 100=0.01%, 250=1.57%, 500=6.69%, 750=9.33%, 1000=11.91% 00:15:31.261 lat (msec) : 2=48.24%, 4=21.17%, 10=1.09%, 20=0.01% 00:15:31.261 cpu : usr=40.97%, sys=34.14%, ctx=5642, majf=0, minf=16352 00:15:31.261 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.261 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.261 issued rwts: total=141653,145727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.261 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:31.261 00:15:31.261 Run status group 0 (all jobs): 00:15:31.261 READ: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=553MiB (580MB), run=10001-10001msec 00:15:31.261 WRITE: bw=56.9MiB/s (59.7MB/s), 56.9MiB/s-56.9MiB/s (59.7MB/s-59.7MB/s), io=569MiB (597MB), run=10001-10001msec 00:15:31.261 ----------------------------------------------------- 00:15:31.261 Suppressions used: 00:15:31.261 count bytes template 00:15:31.261 6 48 /usr/src/fio/parse.c 00:15:31.261 3992 383232 /usr/src/fio/iolog.c 00:15:31.261 1 8 libtcmalloc_minimal.so 00:15:31.261 1 904 libcrypto.so 00:15:31.261 ----------------------------------------------------- 00:15:31.261 00:15:31.261 00:15:31.261 real 0m11.126s 00:15:31.261 user 0m25.303s 00:15:31.261 sys 0m20.783s 00:15:31.261 ************************************ 00:15:31.261 END TEST bdev_fio_rw_verify 00:15:31.261 ************************************ 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2f7e65e2-d726-4184-9f39-d9d474ea2c30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2f7e65e2-d726-4184-9f39-d9d474ea2c30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9ca6f993-f8d6-4923-8be9-2eb09d2c8c7c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9ca6f993-f8d6-4923-8be9-2eb09d2c8c7c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "ed5d4c9d-cc63-4cca-b69d-3bfbc0a2dce0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed5d4c9d-cc63-4cca-b69d-3bfbc0a2dce0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "e97393db-0e8e-48ba-8dca-f7fc95ff5428"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e97393db-0e8e-48ba-8dca-f7fc95ff5428",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "51760ef4-2ac4-418a-b9a6-0982a9498f89"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "51760ef4-2ac4-418a-b9a6-0982a9498f89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8b3f64e9-1896-4eeb-b62b-56e2b36a71c7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8b3f64e9-1896-4eeb-b62b-56e2b36a71c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:31.261 /home/vagrant/spdk_repo/spdk 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:31.261 00:15:31.261 real 0m11.308s 00:15:31.261 user 0m25.378s 00:15:31.261 
sys 0m20.867s 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.261 ************************************ 00:15:31.261 END TEST bdev_fio 00:15:31.261 ************************************ 00:15:31.261 21:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:31.261 21:21:19 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:31.261 21:21:19 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:31.261 21:21:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:31.261 21:21:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.261 21:21:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.261 ************************************ 00:15:31.261 START TEST bdev_verify 00:15:31.261 ************************************ 00:15:31.261 21:21:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:31.261 [2024-12-16 21:21:19.859187] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:31.261 [2024-12-16 21:21:19.859333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85871 ] 00:15:31.261 [2024-12-16 21:21:20.008199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:31.261 [2024-12-16 21:21:20.051831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.261 [2024-12-16 21:21:20.051893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.261 Running I/O for 5 seconds... 
00:15:33.150 23296.00 IOPS, 91.00 MiB/s [2024-12-16T21:21:23.793Z] 23680.00 IOPS, 92.50 MiB/s [2024-12-16T21:21:24.736Z] 24160.00 IOPS, 94.38 MiB/s [2024-12-16T21:21:25.678Z] 24056.00 IOPS, 93.97 MiB/s [2024-12-16T21:21:25.678Z] 24102.40 IOPS, 94.15 MiB/s 00:15:35.978 Latency(us) 00:15:35.978 [2024-12-16T21:21:25.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.978 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x0 length 0x20000 00:15:35.978 nvme0n1 : 5.05 1927.59 7.53 0.00 0.00 66289.56 10788.23 62914.56 00:15:35.978 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x20000 length 0x20000 00:15:35.978 nvme0n1 : 5.04 1826.79 7.14 0.00 0.00 69938.02 13107.20 77433.30 00:15:35.978 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x0 length 0x80000 00:15:35.978 nvme1n1 : 5.04 1931.37 7.54 0.00 0.00 66033.06 9578.34 59688.17 00:15:35.978 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x80000 length 0x80000 00:15:35.978 nvme1n1 : 5.05 1826.02 7.13 0.00 0.00 69810.96 14115.45 66544.25 00:15:35.978 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x0 length 0x80000 00:15:35.978 nvme1n2 : 5.05 1925.78 7.52 0.00 0.00 66120.38 7965.14 62914.56 00:15:35.978 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x80000 length 0x80000 00:15:35.978 nvme1n2 : 5.06 1821.52 7.12 0.00 0.00 69839.40 10082.46 66947.54 00:15:35.978 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x0 length 0x80000 00:15:35.978 nvme1n3 : 5.05 1925.12 7.52 0.00 0.00 66036.06 10435.35 70173.93 00:15:35.978 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x80000 length 0x80000 00:15:35.978 nvme1n3 : 5.06 1821.00 7.11 0.00 0.00 69690.89 11141.12 68560.74 00:15:35.978 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x0 length 0xbd0bd 00:15:35.978 nvme2n1 : 5.07 2598.45 10.15 0.00 0.00 48774.19 6427.57 58074.98 00:15:35.978 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:35.978 nvme2n1 : 5.07 2410.05 9.41 0.00 0.00 52498.23 6604.01 54041.99 00:15:35.978 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:35.978 Verification LBA range: start 0x0 length 0xa0000 00:15:35.978 nvme3n1 : 5.06 1971.98 7.70 0.00 0.00 64278.01 7158.55 67350.84 00:15:35.978 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:35.979 Verification LBA range: start 0xa0000 length 0xa0000 00:15:35.979 nvme3n1 : 5.08 1865.04 7.29 0.00 0.00 67812.26 4663.14 76223.41 00:15:35.979 [2024-12-16T21:21:25.679Z] =================================================================================================================== 00:15:35.979 [2024-12-16T21:21:25.679Z] Total : 23850.72 93.17 0.00 0.00 63955.05 4663.14 77433.30 00:15:36.239 00:15:36.239 real 0m6.006s 00:15:36.240 user 0m9.460s 00:15:36.240 sys 0m1.615s 00:15:36.240 21:21:25 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.240 ************************************ 00:15:36.240 END TEST bdev_verify 00:15:36.240 ************************************ 00:15:36.240 21:21:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:36.240 21:21:25 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:36.240 21:21:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:36.240 21:21:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.240 21:21:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.240 ************************************ 00:15:36.240 START TEST bdev_verify_big_io 00:15:36.240 ************************************ 00:15:36.240 21:21:25 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:36.240 [2024-12-16 21:21:25.937080] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:36.240 [2024-12-16 21:21:25.937209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85962 ] 00:15:36.500 [2024-12-16 21:21:26.082173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:36.500 [2024-12-16 21:21:26.122125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.500 [2024-12-16 21:21:26.122236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.761 Running I/O for 5 seconds... 
00:15:42.870 1160.00 IOPS, 72.50 MiB/s [2024-12-16T21:21:33.142Z] 2872.00 IOPS, 179.50 MiB/s 00:15:43.442 Latency(us) 00:15:43.442 [2024-12-16T21:21:33.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.442 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x0 length 0x2000 00:15:43.442 nvme0n1 : 5.80 126.83 7.93 0.00 0.00 990408.14 41136.44 1109877.37 00:15:43.442 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x2000 length 0x2000 00:15:43.442 nvme0n1 : 6.00 85.40 5.34 0.00 0.00 1409660.46 253271.43 1806777.11 00:15:43.442 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x0 length 0x8000 00:15:43.442 nvme1n1 : 5.79 85.64 5.35 0.00 0.00 1405295.12 52025.50 2529487.95 00:15:43.442 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x8000 length 0x8000 00:15:43.442 nvme1n1 : 6.03 84.96 5.31 0.00 0.00 1345268.18 206488.81 1458327.24 00:15:43.442 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x0 length 0x8000 00:15:43.442 nvme1n2 : 5.81 132.27 8.27 0.00 0.00 892894.15 5797.42 1174405.12 00:15:43.442 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x8000 length 0x8000 00:15:43.442 nvme1n2 : 6.06 73.89 4.62 0.00 0.00 1481880.59 64527.75 1961643.72 00:15:43.442 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x0 length 0x8000 00:15:43.442 nvme1n3 : 5.86 109.20 6.82 0.00 0.00 1040947.99 41338.09 1238932.87 00:15:43.442 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x8000 length 0x8000 00:15:43.442 nvme1n3 : 6.15 125.11 7.82 0.00 0.00 829603.26 7208.96 1167952.34 00:15:43.442 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x0 length 0xbd0b 00:15:43.442 nvme2n1 : 5.80 121.37 7.59 0.00 0.00 905841.57 35691.91 955010.76 00:15:43.442 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:43.442 nvme2n1 : 6.32 141.76 8.86 0.00 0.00 703911.30 5923.45 3432876.50 00:15:43.442 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0x0 length 0xa000 00:15:43.442 nvme3n1 : 5.95 121.02 7.56 0.00 0.00 883463.55 1001.94 1251838.42 00:15:43.442 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.442 Verification LBA range: start 0xa000 length 0xa000 00:15:43.442 nvme3n1 : 6.51 209.00 13.06 0.00 0.00 456533.29 1039.75 3794231.93 00:15:43.442 [2024-12-16T21:21:33.142Z] =================================================================================================================== 00:15:43.442 [2024-12-16T21:21:33.142Z] Total : 1416.44 88.53 0.00 0.00 936302.19 1001.94 3794231.93 00:15:43.704 00:15:43.704 real 0m7.429s 00:15:43.704 user 0m13.721s 00:15:43.704 sys 0m0.434s 00:15:43.704 ************************************ 00:15:43.704 END TEST bdev_verify_big_io 00:15:43.704 21:21:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:15:43.704 21:21:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.704 ************************************ 00:15:43.704 21:21:33 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:43.704 21:21:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:43.704 21:21:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.704 21:21:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.704 ************************************ 00:15:43.704 START TEST bdev_write_zeroes 00:15:43.704 ************************************ 00:15:43.704 21:21:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:43.965 [2024-12-16 21:21:33.440159] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:43.965 [2024-12-16 21:21:33.440293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86068 ] 00:15:43.965 [2024-12-16 21:21:33.587884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.965 [2024-12-16 21:21:33.625968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.226 Running I/O for 1 seconds... 00:15:45.613 79136.00 IOPS, 309.12 MiB/s 00:15:45.613 Latency(us) 00:15:45.613 [2024-12-16T21:21:35.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.613 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.613 nvme0n1 : 1.03 12851.62 50.20 0.00 0.00 9949.41 6654.42 22483.89 00:15:45.613 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.613 nvme1n1 : 1.02 12911.95 50.44 0.00 0.00 9892.30 6856.07 23391.31 00:15:45.613 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.613 nvme1n2 : 1.02 12895.87 50.37 0.00 0.00 9893.86 7007.31 21576.47 00:15:45.613 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.613 nvme1n3 : 1.02 12881.21 50.32 0.00 0.00 9896.15 7007.31 20870.70 00:15:45.613 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.613 nvme2n1 : 1.02 13862.61 54.15 0.00 0.00 9186.73 4864.79 21173.17 00:15:45.613 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.613 nvme3n1 : 1.02 12866.23 50.26 0.00 0.00 9841.63 3957.37 22483.89 00:15:45.613 [2024-12-16T21:21:35.313Z] =================================================================================================================== 00:15:45.613 [2024-12-16T21:21:35.313Z] Total : 78269.49 305.74 0.00 0.00 9769.84 3957.37 23391.31 00:15:45.613 00:15:45.613 real 0m1.854s 00:15:45.613 user 0m1.125s 00:15:45.613 sys 0m0.535s 00:15:45.613 21:21:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.613 ************************************ 00:15:45.613 END TEST bdev_write_zeroes 00:15:45.613 21:21:35 blockdev_xnvme.bdev_write_zeroes -- 
common/autotest_common.sh@10 -- # set +x 00:15:45.613 ************************************ 00:15:45.613 21:21:35 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:45.613 21:21:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:45.613 21:21:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.613 21:21:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.613 ************************************ 00:15:45.613 START TEST bdev_json_nonenclosed 00:15:45.613 ************************************ 00:15:45.613 21:21:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:45.873 [2024-12-16 21:21:35.375586] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:45.873 [2024-12-16 21:21:35.375764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86110 ] 00:15:45.873 [2024-12-16 21:21:35.524060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.873 [2024-12-16 21:21:35.565359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.873 [2024-12-16 21:21:35.565485] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:45.873 [2024-12-16 21:21:35.565504] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:45.873 [2024-12-16 21:21:35.565519] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:46.135 00:15:46.135 real 0m0.357s 00:15:46.135 user 0m0.144s 00:15:46.135 sys 0m0.108s 00:15:46.135 21:21:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.135 21:21:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:46.135 ************************************ 00:15:46.135 END TEST bdev_json_nonenclosed 00:15:46.135 ************************************ 00:15:46.135 21:21:35 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:46.135 21:21:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:46.135 21:21:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.135 21:21:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.135 ************************************ 00:15:46.135 START TEST bdev_json_nonarray 00:15:46.135 ************************************ 00:15:46.135 21:21:35 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:46.135 [2024-12-16 21:21:35.804384] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:46.135 [2024-12-16 21:21:35.804528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86135 ] 00:15:46.428 [2024-12-16 21:21:35.948718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.428 [2024-12-16 21:21:35.987792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.428 [2024-12-16 21:21:35.987940] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:46.428 [2024-12-16 21:21:35.987961] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:46.428 [2024-12-16 21:21:35.987978] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:46.428 00:15:46.428 real 0m0.346s 00:15:46.428 user 0m0.136s 00:15:46.428 sys 0m0.105s 00:15:46.428 21:21:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.428 ************************************ 00:15:46.428 21:21:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:46.428 END TEST bdev_json_nonarray 00:15:46.428 ************************************ 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:46.690 21:21:36 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:47.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.572 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.572 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.572 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.572 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.572 00:15:50.572 real 0m47.711s 00:15:50.572 user 1m12.076s 00:15:50.572 sys 0m40.030s 00:15:50.572 21:21:40 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.572 ************************************ 00:15:50.572 21:21:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.572 END TEST blockdev_xnvme 00:15:50.572 ************************************ 00:15:50.572 21:21:40 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:50.572 21:21:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:50.572 21:21:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.572 21:21:40 -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.572 ************************************ 00:15:50.572 START TEST ublk 00:15:50.572 ************************************ 00:15:50.572 21:21:40 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:50.833 * Looking for test storage... 00:15:50.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:50.833 21:21:40 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:50.833 21:21:40 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:15:50.833 21:21:40 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:50.833 21:21:40 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:50.833 21:21:40 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.833 21:21:40 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.833 21:21:40 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.833 21:21:40 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.833 21:21:40 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.833 21:21:40 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.833 21:21:40 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.834 21:21:40 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.834 21:21:40 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.834 21:21:40 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.834 21:21:40 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.834 21:21:40 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:50.834 21:21:40 ublk -- scripts/common.sh@345 -- # : 1 00:15:50.834 21:21:40 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.834 21:21:40 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.834 21:21:40 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:50.834 21:21:40 ublk -- scripts/common.sh@353 -- # local d=1 00:15:50.834 21:21:40 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.834 21:21:40 ublk -- scripts/common.sh@355 -- # echo 1 00:15:50.834 21:21:40 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.834 21:21:40 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:50.834 21:21:40 ublk -- scripts/common.sh@353 -- # local d=2 00:15:50.834 21:21:40 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.834 21:21:40 ublk -- scripts/common.sh@355 -- # echo 2 00:15:50.834 21:21:40 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.834 21:21:40 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.834 21:21:40 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.834 21:21:40 ublk -- scripts/common.sh@368 -- # return 0 00:15:50.834 21:21:40 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.834 21:21:40 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:50.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.834 --rc genhtml_branch_coverage=1 00:15:50.834 --rc genhtml_function_coverage=1 00:15:50.834 --rc genhtml_legend=1 00:15:50.834 --rc geninfo_all_blocks=1 00:15:50.834 --rc geninfo_unexecuted_blocks=1 00:15:50.834 00:15:50.834 ' 00:15:50.834 21:21:40 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:50.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.834 --rc genhtml_branch_coverage=1 00:15:50.834 --rc genhtml_function_coverage=1 00:15:50.834 --rc genhtml_legend=1 00:15:50.834 --rc geninfo_all_blocks=1 00:15:50.834 --rc geninfo_unexecuted_blocks=1 00:15:50.834 00:15:50.834 ' 00:15:50.834 21:21:40 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:50.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.834 --rc genhtml_branch_coverage=1 00:15:50.834 --rc genhtml_function_coverage=1 00:15:50.834 --rc genhtml_legend=1 00:15:50.834 --rc geninfo_all_blocks=1 00:15:50.834 --rc geninfo_unexecuted_blocks=1 00:15:50.834 00:15:50.834 ' 00:15:50.834 21:21:40 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:50.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.834 --rc genhtml_branch_coverage=1 00:15:50.834 --rc genhtml_function_coverage=1 00:15:50.834 --rc genhtml_legend=1 00:15:50.834 --rc geninfo_all_blocks=1 00:15:50.834 --rc geninfo_unexecuted_blocks=1 00:15:50.834 00:15:50.834 ' 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:50.834 21:21:40 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:50.834 21:21:40 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:50.834 21:21:40 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:50.834 21:21:40 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:50.834 21:21:40 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:50.834 21:21:40 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:50.834 21:21:40 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:50.834 21:21:40 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:50.834 21:21:40 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:50.834 21:21:40 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:50.834 21:21:40 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:50.834 21:21:40 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.834 21:21:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.834 ************************************ 00:15:50.834 START TEST test_save_ublk_config 00:15:50.834 ************************************ 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86431 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86431 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86431 ']' 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.834 21:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.834 [2024-12-16 21:21:40.523231] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
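The setup this test performs can be reproduced by hand against a stock SPDK checkout; a minimal sketch using the RPC methods recorded below (paths, the malloc0 name, and the sizes are taken from the saved config further down; everything here is illustrative, not part of the captured run):

  # illustrative only -- mirrors the RPC flow this test drives
  sudo modprobe ublk_drv                                  # kernel driver, as in ublk.sh@133
  ./build/bin/spdk_tgt -L ublk &                          # target with ublk debug tracing
  ./scripts/rpc.py ublk_create_target                     # enables the UBLK_CMD_* control path
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096  # 8192 blocks x 4096 B = 32 MiB
  ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128  # exposes /dev/ublkb0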
00:15:50.834 [2024-12-16 21:21:40.523348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86431 ] 00:15:51.109 [2024-12-16 21:21:40.667855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.109 [2024-12-16 21:21:40.708573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.751 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.751 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:51.751 21:21:41 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:51.751 21:21:41 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:51.751 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.751 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 [2024-12-16 21:21:41.382656] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:51.751 [2024-12-16 21:21:41.383821] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:51.751 malloc0 00:15:51.752 [2024-12-16 21:21:41.422783] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:51.752 [2024-12-16 21:21:41.422878] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:51.752 [2024-12-16 21:21:41.422887] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:51.752 [2024-12-16 21:21:41.422902] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:51.752 [2024-12-16 21:21:41.431809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:51.752 [2024-12-16 21:21:41.431861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:51.752 [2024-12-16 21:21:41.438664] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:51.752 [2024-12-16 21:21:41.438823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:52.012 [2024-12-16 21:21:41.455661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:52.012 0 00:15:52.012 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.012 21:21:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:52.012 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.012 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:52.274 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.274 21:21:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:52.274 "subsystems": [ 00:15:52.274 { 00:15:52.274 "subsystem": "fsdev", 00:15:52.274 "config": [ 00:15:52.274 { 00:15:52.274 "method": "fsdev_set_opts", 00:15:52.274 "params": { 00:15:52.274 "fsdev_io_pool_size": 65535, 00:15:52.274 "fsdev_io_cache_size": 256 00:15:52.274 } 00:15:52.274 } 00:15:52.274 ] 00:15:52.274 }, 00:15:52.274 { 00:15:52.274 "subsystem": "keyring", 00:15:52.274 "config": [] 00:15:52.274 }, 00:15:52.274 { 00:15:52.274 "subsystem": "iobuf", 00:15:52.274 "config": [ 00:15:52.274 { 
00:15:52.274 "method": "iobuf_set_options", 00:15:52.274 "params": { 00:15:52.274 "small_pool_count": 8192, 00:15:52.274 "large_pool_count": 1024, 00:15:52.274 "small_bufsize": 8192, 00:15:52.274 "large_bufsize": 135168, 00:15:52.274 "enable_numa": false 00:15:52.274 } 00:15:52.274 } 00:15:52.274 ] 00:15:52.274 }, 00:15:52.274 { 00:15:52.274 "subsystem": "sock", 00:15:52.274 "config": [ 00:15:52.274 { 00:15:52.274 "method": "sock_set_default_impl", 00:15:52.274 "params": { 00:15:52.274 "impl_name": "posix" 00:15:52.274 } 00:15:52.274 }, 00:15:52.274 { 00:15:52.274 "method": "sock_impl_set_options", 00:15:52.274 "params": { 00:15:52.274 "impl_name": "ssl", 00:15:52.274 "recv_buf_size": 4096, 00:15:52.274 "send_buf_size": 4096, 00:15:52.274 "enable_recv_pipe": true, 00:15:52.274 "enable_quickack": false, 00:15:52.274 "enable_placement_id": 0, 00:15:52.274 "enable_zerocopy_send_server": true, 00:15:52.274 "enable_zerocopy_send_client": false, 00:15:52.274 "zerocopy_threshold": 0, 00:15:52.274 "tls_version": 0, 00:15:52.274 "enable_ktls": false 00:15:52.274 } 00:15:52.274 }, 00:15:52.274 { 00:15:52.274 "method": "sock_impl_set_options", 00:15:52.274 "params": { 00:15:52.274 "impl_name": "posix", 00:15:52.274 "recv_buf_size": 2097152, 00:15:52.274 "send_buf_size": 2097152, 00:15:52.274 "enable_recv_pipe": true, 00:15:52.274 "enable_quickack": false, 00:15:52.274 "enable_placement_id": 0, 00:15:52.274 "enable_zerocopy_send_server": true, 00:15:52.274 "enable_zerocopy_send_client": false, 00:15:52.274 "zerocopy_threshold": 0, 00:15:52.274 "tls_version": 0, 00:15:52.274 "enable_ktls": false 00:15:52.274 } 00:15:52.274 } 00:15:52.274 ] 00:15:52.274 }, 00:15:52.274 { 00:15:52.274 "subsystem": "vmd", 00:15:52.274 "config": [] 00:15:52.274 }, 00:15:52.274 { 00:15:52.274 "subsystem": "accel", 00:15:52.274 "config": [ 00:15:52.274 { 00:15:52.274 "method": "accel_set_options", 00:15:52.274 "params": { 00:15:52.274 "small_cache_size": 128, 00:15:52.274 "large_cache_size": 16, 00:15:52.274 "task_count": 2048, 00:15:52.274 "sequence_count": 2048, 00:15:52.274 "buf_count": 2048 00:15:52.274 } 00:15:52.274 } 00:15:52.274 ] 00:15:52.274 }, 00:15:52.274 { 00:15:52.274 "subsystem": "bdev", 00:15:52.274 "config": [ 00:15:52.274 { 00:15:52.274 "method": "bdev_set_options", 00:15:52.274 "params": { 00:15:52.274 "bdev_io_pool_size": 65535, 00:15:52.274 "bdev_io_cache_size": 256, 00:15:52.274 "bdev_auto_examine": true, 00:15:52.274 "iobuf_small_cache_size": 128, 00:15:52.274 "iobuf_large_cache_size": 16 00:15:52.274 } 00:15:52.274 }, 00:15:52.274 { 00:15:52.274 "method": "bdev_raid_set_options", 00:15:52.275 "params": { 00:15:52.275 "process_window_size_kb": 1024, 00:15:52.275 "process_max_bandwidth_mb_sec": 0 00:15:52.275 } 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "method": "bdev_iscsi_set_options", 00:15:52.275 "params": { 00:15:52.275 "timeout_sec": 30 00:15:52.275 } 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "method": "bdev_nvme_set_options", 00:15:52.275 "params": { 00:15:52.275 "action_on_timeout": "none", 00:15:52.275 "timeout_us": 0, 00:15:52.275 "timeout_admin_us": 0, 00:15:52.275 "keep_alive_timeout_ms": 10000, 00:15:52.275 "arbitration_burst": 0, 00:15:52.275 "low_priority_weight": 0, 00:15:52.275 "medium_priority_weight": 0, 00:15:52.275 "high_priority_weight": 0, 00:15:52.275 "nvme_adminq_poll_period_us": 10000, 00:15:52.275 "nvme_ioq_poll_period_us": 0, 00:15:52.275 "io_queue_requests": 0, 00:15:52.275 "delay_cmd_submit": true, 00:15:52.275 "transport_retry_count": 4, 00:15:52.275 
"bdev_retry_count": 3, 00:15:52.275 "transport_ack_timeout": 0, 00:15:52.275 "ctrlr_loss_timeout_sec": 0, 00:15:52.275 "reconnect_delay_sec": 0, 00:15:52.275 "fast_io_fail_timeout_sec": 0, 00:15:52.275 "disable_auto_failback": false, 00:15:52.275 "generate_uuids": false, 00:15:52.275 "transport_tos": 0, 00:15:52.275 "nvme_error_stat": false, 00:15:52.275 "rdma_srq_size": 0, 00:15:52.275 "io_path_stat": false, 00:15:52.275 "allow_accel_sequence": false, 00:15:52.275 "rdma_max_cq_size": 0, 00:15:52.275 "rdma_cm_event_timeout_ms": 0, 00:15:52.275 "dhchap_digests": [ 00:15:52.275 "sha256", 00:15:52.275 "sha384", 00:15:52.275 "sha512" 00:15:52.275 ], 00:15:52.275 "dhchap_dhgroups": [ 00:15:52.275 "null", 00:15:52.275 "ffdhe2048", 00:15:52.275 "ffdhe3072", 00:15:52.275 "ffdhe4096", 00:15:52.275 "ffdhe6144", 00:15:52.275 "ffdhe8192" 00:15:52.275 ], 00:15:52.275 "rdma_umr_per_io": false 00:15:52.275 } 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "method": "bdev_nvme_set_hotplug", 00:15:52.275 "params": { 00:15:52.275 "period_us": 100000, 00:15:52.275 "enable": false 00:15:52.275 } 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "method": "bdev_malloc_create", 00:15:52.275 "params": { 00:15:52.275 "name": "malloc0", 00:15:52.275 "num_blocks": 8192, 00:15:52.275 "block_size": 4096, 00:15:52.275 "physical_block_size": 4096, 00:15:52.275 "uuid": "7a8a305e-4a3a-4e75-ab3f-2f68139048e4", 00:15:52.275 "optimal_io_boundary": 0, 00:15:52.275 "md_size": 0, 00:15:52.275 "dif_type": 0, 00:15:52.275 "dif_is_head_of_md": false, 00:15:52.275 "dif_pi_format": 0 00:15:52.275 } 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "method": "bdev_wait_for_examine" 00:15:52.275 } 00:15:52.275 ] 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "subsystem": "scsi", 00:15:52.275 "config": null 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "subsystem": "scheduler", 00:15:52.275 "config": [ 00:15:52.275 { 00:15:52.275 "method": "framework_set_scheduler", 00:15:52.275 "params": { 00:15:52.275 "name": "static" 00:15:52.275 } 00:15:52.275 } 00:15:52.275 ] 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "subsystem": "vhost_scsi", 00:15:52.275 "config": [] 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "subsystem": "vhost_blk", 00:15:52.275 "config": [] 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "subsystem": "ublk", 00:15:52.275 "config": [ 00:15:52.275 { 00:15:52.275 "method": "ublk_create_target", 00:15:52.275 "params": { 00:15:52.275 "cpumask": "1" 00:15:52.275 } 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "method": "ublk_start_disk", 00:15:52.275 "params": { 00:15:52.275 "bdev_name": "malloc0", 00:15:52.275 "ublk_id": 0, 00:15:52.275 "num_queues": 1, 00:15:52.275 "queue_depth": 128 00:15:52.275 } 00:15:52.275 } 00:15:52.275 ] 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "subsystem": "nbd", 00:15:52.275 "config": [] 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "subsystem": "nvmf", 00:15:52.275 "config": [ 00:15:52.275 { 00:15:52.275 "method": "nvmf_set_config", 00:15:52.275 "params": { 00:15:52.275 "discovery_filter": "match_any", 00:15:52.275 "admin_cmd_passthru": { 00:15:52.275 "identify_ctrlr": false 00:15:52.275 }, 00:15:52.275 "dhchap_digests": [ 00:15:52.275 "sha256", 00:15:52.275 "sha384", 00:15:52.275 "sha512" 00:15:52.275 ], 00:15:52.275 "dhchap_dhgroups": [ 00:15:52.275 "null", 00:15:52.275 "ffdhe2048", 00:15:52.275 "ffdhe3072", 00:15:52.275 "ffdhe4096", 00:15:52.275 "ffdhe6144", 00:15:52.275 "ffdhe8192" 00:15:52.275 ] 00:15:52.275 } 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "method": "nvmf_set_max_subsystems", 00:15:52.275 "params": { 
00:15:52.275 "max_subsystems": 1024 00:15:52.275 } 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "method": "nvmf_set_crdt", 00:15:52.275 "params": { 00:15:52.275 "crdt1": 0, 00:15:52.275 "crdt2": 0, 00:15:52.275 "crdt3": 0 00:15:52.275 } 00:15:52.275 } 00:15:52.275 ] 00:15:52.275 }, 00:15:52.275 { 00:15:52.275 "subsystem": "iscsi", 00:15:52.275 "config": [ 00:15:52.275 { 00:15:52.275 "method": "iscsi_set_options", 00:15:52.275 "params": { 00:15:52.275 "node_base": "iqn.2016-06.io.spdk", 00:15:52.275 "max_sessions": 128, 00:15:52.275 "max_connections_per_session": 2, 00:15:52.275 "max_queue_depth": 64, 00:15:52.275 "default_time2wait": 2, 00:15:52.275 "default_time2retain": 20, 00:15:52.275 "first_burst_length": 8192, 00:15:52.275 "immediate_data": true, 00:15:52.275 "allow_duplicated_isid": false, 00:15:52.275 "error_recovery_level": 0, 00:15:52.276 "nop_timeout": 60, 00:15:52.276 "nop_in_interval": 30, 00:15:52.276 "disable_chap": false, 00:15:52.276 "require_chap": false, 00:15:52.276 "mutual_chap": false, 00:15:52.276 "chap_group": 0, 00:15:52.276 "max_large_datain_per_connection": 64, 00:15:52.276 "max_r2t_per_connection": 4, 00:15:52.276 "pdu_pool_size": 36864, 00:15:52.276 "immediate_data_pool_size": 16384, 00:15:52.276 "data_out_pool_size": 2048 00:15:52.276 } 00:15:52.276 } 00:15:52.276 ] 00:15:52.276 } 00:15:52.276 ] 00:15:52.276 }' 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86431 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86431 ']' 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86431 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86431 00:15:52.276 killing process with pid 86431 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86431' 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86431 00:15:52.276 21:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86431 00:15:52.537 [2024-12-16 21:21:42.183908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:52.537 [2024-12-16 21:21:42.221641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:52.537 [2024-12-16 21:21:42.221795] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:52.537 [2024-12-16 21:21:42.230646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:52.537 [2024-12-16 21:21:42.230713] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:52.537 [2024-12-16 21:21:42.230722] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:52.537 [2024-12-16 21:21:42.230751] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:52.537 [2024-12-16 21:21:42.230901] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:53.111 21:21:42 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86469 00:15:53.111 21:21:42 
ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86469 00:15:53.111 21:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86469 ']' 00:15:53.111 21:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.111 21:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.111 21:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.111 21:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.111 21:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:53.111 21:21:42 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:53.111 "subsystems": [ 00:15:53.111 { 00:15:53.111 "subsystem": "fsdev", 00:15:53.111 "config": [ 00:15:53.111 { 00:15:53.111 "method": "fsdev_set_opts", 00:15:53.111 "params": { 00:15:53.111 "fsdev_io_pool_size": 65535, 00:15:53.111 "fsdev_io_cache_size": 256 00:15:53.111 } 00:15:53.111 } 00:15:53.111 ] 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "subsystem": "keyring", 00:15:53.111 "config": [] 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "subsystem": "iobuf", 00:15:53.111 "config": [ 00:15:53.111 { 00:15:53.111 "method": "iobuf_set_options", 00:15:53.111 "params": { 00:15:53.111 "small_pool_count": 8192, 00:15:53.111 "large_pool_count": 1024, 00:15:53.111 "small_bufsize": 8192, 00:15:53.111 "large_bufsize": 135168, 00:15:53.111 "enable_numa": false 00:15:53.111 } 00:15:53.111 } 00:15:53.111 ] 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "subsystem": "sock", 00:15:53.111 "config": [ 00:15:53.111 { 00:15:53.111 "method": "sock_set_default_impl", 00:15:53.111 "params": { 00:15:53.111 "impl_name": "posix" 00:15:53.111 } 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "method": "sock_impl_set_options", 00:15:53.111 "params": { 00:15:53.111 "impl_name": "ssl", 00:15:53.111 "recv_buf_size": 4096, 00:15:53.111 "send_buf_size": 4096, 00:15:53.111 "enable_recv_pipe": true, 00:15:53.111 "enable_quickack": false, 00:15:53.111 "enable_placement_id": 0, 00:15:53.111 "enable_zerocopy_send_server": true, 00:15:53.111 "enable_zerocopy_send_client": false, 00:15:53.111 "zerocopy_threshold": 0, 00:15:53.111 "tls_version": 0, 00:15:53.111 "enable_ktls": false 00:15:53.111 } 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "method": "sock_impl_set_options", 00:15:53.111 "params": { 00:15:53.111 "impl_name": "posix", 00:15:53.111 "recv_buf_size": 2097152, 00:15:53.111 "send_buf_size": 2097152, 00:15:53.111 "enable_recv_pipe": true, 00:15:53.111 "enable_quickack": false, 00:15:53.111 "enable_placement_id": 0, 00:15:53.111 "enable_zerocopy_send_server": true, 00:15:53.111 "enable_zerocopy_send_client": false, 00:15:53.111 "zerocopy_threshold": 0, 00:15:53.111 "tls_version": 0, 00:15:53.111 "enable_ktls": false 00:15:53.111 } 00:15:53.111 } 00:15:53.111 ] 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "subsystem": "vmd", 00:15:53.111 "config": [] 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "subsystem": "accel", 00:15:53.111 "config": [ 00:15:53.111 { 00:15:53.111 "method": "accel_set_options", 00:15:53.111 "params": { 00:15:53.111 "small_cache_size": 128, 00:15:53.111 "large_cache_size": 16, 00:15:53.111 "task_count": 2048, 00:15:53.111 "sequence_count": 2048, 00:15:53.111 
"buf_count": 2048 00:15:53.111 } 00:15:53.111 } 00:15:53.111 ] 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "subsystem": "bdev", 00:15:53.111 "config": [ 00:15:53.111 { 00:15:53.111 "method": "bdev_set_options", 00:15:53.111 "params": { 00:15:53.111 "bdev_io_pool_size": 65535, 00:15:53.111 "bdev_io_cache_size": 256, 00:15:53.111 "bdev_auto_examine": true, 00:15:53.111 "iobuf_small_cache_size": 128, 00:15:53.111 "iobuf_large_cache_size": 16 00:15:53.111 } 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "method": "bdev_raid_set_options", 00:15:53.111 "params": { 00:15:53.111 "process_window_size_kb": 1024, 00:15:53.111 "process_max_bandwidth_mb_sec": 0 00:15:53.111 } 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "method": "bdev_iscsi_set_options", 00:15:53.111 "params": { 00:15:53.111 "timeout_sec": 30 00:15:53.111 } 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "method": "bdev_nvme_set_options", 00:15:53.111 "params": { 00:15:53.111 "action_on_timeout": "none", 00:15:53.111 "timeout_us": 0, 00:15:53.111 "timeout_admin_us": 0, 00:15:53.111 "keep_alive_timeout_ms": 10000, 00:15:53.111 "arbitration_burst": 0, 00:15:53.111 "low_priority_weight": 0, 00:15:53.111 "medium_priority_weight": 0, 00:15:53.111 "high_priority_weight": 0, 00:15:53.111 "nvme_adminq_poll_period_us": 10000, 00:15:53.111 "nvme_ioq_poll_period_us": 0, 00:15:53.111 "io_queue_requests": 0, 00:15:53.111 "delay_cmd_submit": true, 00:15:53.111 "transport_retry_count": 4, 00:15:53.111 "bdev_retry_count": 3, 00:15:53.111 "transport_ack_timeout": 0, 00:15:53.111 "ctrlr_loss_timeout_sec": 0, 00:15:53.111 "reconnect_delay_sec": 0, 00:15:53.111 "fast_io_fail_timeout_sec": 0, 00:15:53.111 "disable_auto_failback": false, 00:15:53.111 "generate_uuids": false, 00:15:53.111 "transport_tos": 0, 00:15:53.111 "nvme_error_stat": false, 00:15:53.111 "rdma_srq_size": 0, 00:15:53.111 "io_path_stat": false, 00:15:53.111 "allow_accel_sequence": false, 00:15:53.111 "rdma_max_cq_size": 0, 00:15:53.111 "rdma_cm_event_timeout_ms": 0, 00:15:53.111 "dhchap_digests": [ 00:15:53.111 "sha256", 00:15:53.111 "sha384", 00:15:53.111 "sha512" 00:15:53.111 ], 00:15:53.111 "dhchap_dhgroups": [ 00:15:53.111 "null", 00:15:53.111 "ffdhe2048", 00:15:53.111 "ffdhe3072", 00:15:53.111 "ffdhe4096", 00:15:53.111 "ffdhe6144", 00:15:53.111 "ffdhe8192" 00:15:53.111 ], 00:15:53.111 "rdma_umr_per_io": false 00:15:53.111 } 00:15:53.111 }, 00:15:53.111 { 00:15:53.111 "method": "bdev_nvme_set_hotplug", 00:15:53.111 "params": { 00:15:53.111 "period_us": 100000, 00:15:53.111 "enable": false 00:15:53.111 } 00:15:53.111 }, 00:15:53.111 { 00:15:53.112 "method": "bdev_malloc_create", 00:15:53.112 "params": { 00:15:53.112 "name": "malloc0", 00:15:53.112 "num_blocks": 8192, 00:15:53.112 "block_size": 4096, 00:15:53.112 "physical_block_size": 4096, 00:15:53.112 "uuid": "7a8a305e-4a3a-4e75-ab3f-2f68139048e4", 00:15:53.112 "optimal_io_boundary": 0, 00:15:53.112 "md_size": 0, 00:15:53.112 "dif_type": 0, 00:15:53.112 "dif_is_head_of_md": false, 00:15:53.112 "dif_pi_format": 0 00:15:53.112 } 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "method": "bdev_wait_for_examine" 00:15:53.112 } 00:15:53.112 ] 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "subsystem": "scsi", 00:15:53.112 "config": null 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "subsystem": "scheduler", 00:15:53.112 "config": [ 00:15:53.112 { 00:15:53.112 "method": "framework_set_scheduler", 00:15:53.112 "params": { 00:15:53.112 "name": "static" 00:15:53.112 } 00:15:53.112 } 00:15:53.112 ] 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "subsystem": 
"vhost_scsi", 00:15:53.112 "config": [] 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "subsystem": "vhost_blk", 00:15:53.112 "config": [] 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "subsystem": "ublk", 00:15:53.112 "config": [ 00:15:53.112 { 00:15:53.112 "method": "ublk_create_target", 00:15:53.112 "params": { 00:15:53.112 "cpumask": "1" 00:15:53.112 } 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "method": "ublk_start_disk", 00:15:53.112 "params": { 00:15:53.112 "bdev_name": "malloc0", 00:15:53.112 "ublk_id": 0, 00:15:53.112 "num_queues": 1, 00:15:53.112 "queue_depth": 128 00:15:53.112 } 00:15:53.112 } 00:15:53.112 ] 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "subsystem": "nbd", 00:15:53.112 "config": [] 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "subsystem": "nvmf", 00:15:53.112 "config": [ 00:15:53.112 { 00:15:53.112 "method": "nvmf_set_config", 00:15:53.112 "params": { 00:15:53.112 "discovery_filter": "match_any", 00:15:53.112 "admin_cmd_passthru": { 00:15:53.112 "identify_ctrlr": false 00:15:53.112 }, 00:15:53.112 "dhchap_digests": [ 00:15:53.112 "sha256", 00:15:53.112 "sha384", 00:15:53.112 "sha512" 00:15:53.112 ], 00:15:53.112 "dhchap_dhgroups": [ 00:15:53.112 "null", 00:15:53.112 "ffdhe2048", 00:15:53.112 "ffdhe3072", 00:15:53.112 "ffdhe4096", 00:15:53.112 "ffdhe6144", 00:15:53.112 "ffdhe8192" 00:15:53.112 ] 00:15:53.112 } 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "method": "nvmf_set_max_subsystems", 00:15:53.112 "params": { 00:15:53.112 "max_subsystems": 1024 00:15:53.112 } 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "method": "nvmf_set_crdt", 00:15:53.112 "params": { 00:15:53.112 "crdt1": 0, 00:15:53.112 "crdt2": 0, 00:15:53.112 "crdt3": 0 00:15:53.112 } 00:15:53.112 } 00:15:53.112 ] 00:15:53.112 }, 00:15:53.112 { 00:15:53.112 "subsystem": "iscsi", 00:15:53.112 "config": [ 00:15:53.112 { 00:15:53.112 "method": "iscsi_set_options", 00:15:53.112 "params": { 00:15:53.112 "node_base": "iqn.2016-06.io.spdk", 00:15:53.112 "max_sessions": 128, 00:15:53.112 "max_connections_per_session": 2, 00:15:53.112 "max_queue_depth": 64, 00:15:53.112 "default_time2wait": 2, 00:15:53.112 "default_time2retain": 20, 00:15:53.112 "first_burst_length": 8192, 00:15:53.112 "immediate_data": true, 00:15:53.112 "allow_duplicated_isid": false, 00:15:53.112 "error_recovery_level": 0, 00:15:53.112 "nop_timeout": 60, 00:15:53.112 "nop_in_interval": 30, 00:15:53.112 "disable_chap": false, 00:15:53.112 "require_chap": false, 00:15:53.112 "mutual_chap": false, 00:15:53.112 "chap_group": 0, 00:15:53.112 "max_large_datain_per_connection": 64, 00:15:53.112 "max_r2t_per_connection": 4, 00:15:53.112 "pdu_pool_size": 36864, 00:15:53.112 "immediate_data_pool_size": 16384, 00:15:53.112 "data_out_pool_size": 2048 00:15:53.112 } 00:15:53.112 } 00:15:53.112 ] 00:15:53.112 } 00:15:53.112 ] 00:15:53.112 }' 00:15:53.112 21:21:42 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:53.373 [2024-12-16 21:21:42.853652] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:53.373 [2024-12-16 21:21:42.853810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86469 ] 00:15:53.373 [2024-12-16 21:21:43.002685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.373 [2024-12-16 21:21:43.043782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.945 [2024-12-16 21:21:43.527653] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:53.945 [2024-12-16 21:21:43.528109] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:53.945 [2024-12-16 21:21:43.535809] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:53.945 [2024-12-16 21:21:43.535906] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:53.945 [2024-12-16 21:21:43.535914] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:53.945 [2024-12-16 21:21:43.535930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:53.945 [2024-12-16 21:21:43.544793] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:53.945 [2024-12-16 21:21:43.544836] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:53.945 [2024-12-16 21:21:43.551672] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:53.945 [2024-12-16 21:21:43.551822] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:53.945 [2024-12-16 21:21:43.568658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86469 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86469 ']' 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86469 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86469 00:15:54.206 killing process with pid 86469 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.206 
21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86469' 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86469 00:15:54.206 21:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86469 00:15:54.468 [2024-12-16 21:21:44.146154] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:54.730 [2024-12-16 21:21:44.191797] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:54.730 [2024-12-16 21:21:44.191967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:54.730 [2024-12-16 21:21:44.199677] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:54.730 [2024-12-16 21:21:44.199748] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:54.730 [2024-12-16 21:21:44.199766] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:54.730 [2024-12-16 21:21:44.199811] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:54.730 [2024-12-16 21:21:44.199983] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:55.301 21:21:44 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:55.301 ************************************ 00:15:55.301 END TEST test_save_ublk_config 00:15:55.301 ************************************ 00:15:55.301 00:15:55.301 real 0m4.375s 00:15:55.301 user 0m2.811s 00:15:55.301 sys 0m2.211s 00:15:55.301 21:21:44 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.301 21:21:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:55.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.301 21:21:44 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86525 00:15:55.301 21:21:44 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:55.301 21:21:44 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86525 00:15:55.301 21:21:44 ublk -- common/autotest_common.sh@835 -- # '[' -z 86525 ']' 00:15:55.301 21:21:44 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.301 21:21:44 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:55.301 21:21:44 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.301 21:21:44 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.301 21:21:44 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.301 21:21:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:55.301 [2024-12-16 21:21:44.964604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
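waitforlisten simply polls the RPC socket until spdk_tgt answers; an equivalent hand-rolled loop (retry count and interval are assumptions, the socket path is the default seen above) would be:

  # illustrative readiness poll against /var/tmp/spdk.sock
  for i in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done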
00:15:55.301 [2024-12-16 21:21:44.964790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86525 ] 00:15:55.566 [2024-12-16 21:21:45.119918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:55.566 [2024-12-16 21:21:45.146426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.566 [2024-12-16 21:21:45.146520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.136 21:21:45 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.136 21:21:45 ublk -- common/autotest_common.sh@868 -- # return 0 00:15:56.136 21:21:45 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:56.136 21:21:45 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:56.136 21:21:45 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.136 21:21:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.136 ************************************ 00:15:56.136 START TEST test_create_ublk 00:15:56.136 ************************************ 00:15:56.136 21:21:45 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:15:56.136 21:21:45 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:56.136 21:21:45 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.136 21:21:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.136 [2024-12-16 21:21:45.827658] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:56.136 [2024-12-16 21:21:45.829842] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:56.136 21:21:45 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.136 21:21:45 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:56.136 21:21:45 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:56.136 21:21:45 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.136 21:21:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.397 21:21:45 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.397 21:21:45 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:56.397 21:21:45 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:56.397 21:21:45 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.397 21:21:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.397 [2024-12-16 21:21:45.953848] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:56.397 [2024-12-16 21:21:45.954365] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:56.397 [2024-12-16 21:21:45.954398] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:56.397 [2024-12-16 21:21:45.954409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:56.397 [2024-12-16 21:21:45.961682] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:56.397 [2024-12-16 21:21:45.961740] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:56.397 
[2024-12-16 21:21:45.969687] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:56.397 [2024-12-16 21:21:45.970473] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:56.397 [2024-12-16 21:21:45.993675] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:56.397 21:21:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.397 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:56.397 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:56.397 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:56.397 21:21:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.397 21:21:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.397 21:21:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.397 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:56.397 { 00:15:56.397 "ublk_device": "/dev/ublkb0", 00:15:56.397 "id": 0, 00:15:56.397 "queue_depth": 512, 00:15:56.397 "num_queues": 4, 00:15:56.397 "bdev_name": "Malloc0" 00:15:56.397 } 00:15:56.397 ]' 00:15:56.397 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:56.397 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:56.397 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:56.397 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:56.397 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:56.659 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:56.659 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:56.659 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:56.659 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:56.659 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:56.659 21:21:46 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
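Once this job finishes, the 0xcc fill pattern can also be spot-checked outside fio; a minimal sketch (a 4 KiB probe at offset 0, which the sequential write phase covers; illustrative only):

  # illustrative pattern probe -- not part of the captured run
  sudo dd if=/dev/ublkb0 bs=4096 count=1 2>/dev/null | od -An -tx1 | head -n 1
  # expect a line of: cc cc cc cc ...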
00:15:56.659 21:21:46 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:56.659 fio: verification read phase will never start because write phase uses all of runtime 00:15:56.659 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:56.659 fio-3.35 00:15:56.659 Starting 1 process 00:16:08.866 00:16:08.866 fio_test: (groupid=0, jobs=1): err= 0: pid=86570: Mon Dec 16 21:21:56 2024 00:16:08.866 write: IOPS=16.0k, BW=62.6MiB/s (65.6MB/s)(626MiB/10001msec); 0 zone resets 00:16:08.866 clat (usec): min=35, max=4130, avg=61.67, stdev=82.00 00:16:08.866 lat (usec): min=36, max=4152, avg=62.07, stdev=82.03 00:16:08.866 clat percentiles (usec): 00:16:08.866 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:16:08.866 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:16:08.866 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 67], 95.00th=[ 73], 00:16:08.866 | 99.00th=[ 123], 99.50th=[ 139], 99.90th=[ 1336], 99.95th=[ 2474], 00:16:08.866 | 99.99th=[ 3359] 00:16:08.866 bw ( KiB/s): min=41400, max=68152, per=99.78%, avg=63967.16, stdev=7207.51, samples=19 00:16:08.866 iops : min=10350, max=17038, avg=15991.79, stdev=1801.88, samples=19 00:16:08.866 lat (usec) : 50=5.20%, 100=92.57%, 250=2.00%, 500=0.10%, 750=0.01% 00:16:08.866 lat (usec) : 1000=0.01% 00:16:08.866 lat (msec) : 2=0.05%, 4=0.07%, 10=0.01% 00:16:08.866 cpu : usr=2.47%, sys=12.78%, ctx=160277, majf=0, minf=796 00:16:08.866 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.866 issued rwts: total=0,160288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.866 00:16:08.866 Run status group 0 (all jobs): 00:16:08.866 WRITE: bw=62.6MiB/s (65.6MB/s), 62.6MiB/s-62.6MiB/s (65.6MB/s-65.6MB/s), io=626MiB (657MB), run=10001-10001msec 00:16:08.866 00:16:08.866 Disk stats (read/write): 00:16:08.866 ublkb0: ios=0/158554, merge=0/0, ticks=0/8382, in_queue=8382, util=99.07% 00:16:08.866 21:21:56 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:08.866 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.866 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.866 [2024-12-16 21:21:56.435735] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:08.866 [2024-12-16 21:21:56.471682] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.866 [2024-12-16 21:21:56.472349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.866 [2024-12-16 21:21:56.478641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.866 [2024-12-16 21:21:56.478895] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:08.866 [2024-12-16 21:21:56.478915] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:08.866 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.866 21:21:56 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:08.866 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:08.866 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:08.866 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:08.866 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.866 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:08.866 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 [2024-12-16 21:21:56.486732] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:08.867 request: 00:16:08.867 { 00:16:08.867 "ublk_id": 0, 00:16:08.867 "method": "ublk_stop_disk", 00:16:08.867 "req_id": 1 00:16:08.867 } 00:16:08.867 Got JSON-RPC error response 00:16:08.867 response: 00:16:08.867 { 00:16:08.867 "code": -19, 00:16:08.867 "message": "No such device" 00:16:08.867 } 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:08.867 21:21:56 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 [2024-12-16 21:21:56.502707] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:08.867 [2024-12-16 21:21:56.504575] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:08.867 [2024-12-16 21:21:56.504612] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:56 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:56 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:08.867 21:21:56 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:56 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:08.867 21:21:56 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:08.867 21:21:56 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:08.867 21:21:56 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:56 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:08.867 21:21:56 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:08.867 21:21:56 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:08.867 00:16:08.867 real 0m10.856s 00:16:08.867 user 0m0.545s 00:16:08.867 sys 0m1.369s 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 ************************************ 00:16:08.867 END TEST test_create_ublk 00:16:08.867 ************************************ 00:16:08.867 21:21:56 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:08.867 21:21:56 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:08.867 21:21:56 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.867 21:21:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 ************************************ 00:16:08.867 START TEST test_create_multi_ublk 00:16:08.867 ************************************ 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 [2024-12-16 21:21:56.722646] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:08.867 [2024-12-16 21:21:56.723787] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 [2024-12-16 21:21:56.807090] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:08.867 [2024-12-16 21:21:56.807408] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:08.867 [2024-12-16 21:21:56.807422] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:08.867 [2024-12-16 21:21:56.807428] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:08.867 [2024-12-16 21:21:56.830650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:08.867 [2024-12-16 21:21:56.830669] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:08.867 [2024-12-16 21:21:56.842663] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:08.867 [2024-12-16 21:21:56.843183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:08.867 [2024-12-16 21:21:56.878666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 [2024-12-16 21:21:56.986751] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:08.867 [2024-12-16 21:21:56.987066] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:08.867 [2024-12-16 21:21:56.987078] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:08.867 [2024-12-16 21:21:56.987084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:08.867 [2024-12-16 21:21:56.998670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:08.867 [2024-12-16 21:21:56.998692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:08.867 [2024-12-16 21:21:57.010652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:08.867 [2024-12-16 21:21:57.011169] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:08.867 [2024-12-16 21:21:57.046656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.867 
21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 [2024-12-16 21:21:57.154745] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:08.867 [2024-12-16 21:21:57.155056] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:08.867 [2024-12-16 21:21:57.155068] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:08.867 [2024-12-16 21:21:57.155073] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:08.867 [2024-12-16 21:21:57.166669] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:08.867 [2024-12-16 21:21:57.166686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:08.867 [2024-12-16 21:21:57.178668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:08.867 [2024-12-16 21:21:57.179183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:08.867 [2024-12-16 21:21:57.191667] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.867 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 [2024-12-16 21:21:57.298738] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:08.868 [2024-12-16 21:21:57.299062] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:08.868 [2024-12-16 21:21:57.299075] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:08.868 [2024-12-16 21:21:57.299081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:08.868 
[2024-12-16 21:21:57.311848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:08.868 [2024-12-16 21:21:57.311869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:08.868 [2024-12-16 21:21:57.322656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:08.868 [2024-12-16 21:21:57.323168] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:08.868 [2024-12-16 21:21:57.362664] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:08.868 { 00:16:08.868 "ublk_device": "/dev/ublkb0", 00:16:08.868 "id": 0, 00:16:08.868 "queue_depth": 512, 00:16:08.868 "num_queues": 4, 00:16:08.868 "bdev_name": "Malloc0" 00:16:08.868 }, 00:16:08.868 { 00:16:08.868 "ublk_device": "/dev/ublkb1", 00:16:08.868 "id": 1, 00:16:08.868 "queue_depth": 512, 00:16:08.868 "num_queues": 4, 00:16:08.868 "bdev_name": "Malloc1" 00:16:08.868 }, 00:16:08.868 { 00:16:08.868 "ublk_device": "/dev/ublkb2", 00:16:08.868 "id": 2, 00:16:08.868 "queue_depth": 512, 00:16:08.868 "num_queues": 4, 00:16:08.868 "bdev_name": "Malloc2" 00:16:08.868 }, 00:16:08.868 { 00:16:08.868 "ublk_device": "/dev/ublkb3", 00:16:08.868 "id": 3, 00:16:08.868 "queue_depth": 512, 00:16:08.868 "num_queues": 4, 00:16:08.868 "bdev_name": "Malloc3" 00:16:08.868 } 00:16:08.868 ]' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:08.868 21:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 [2024-12-16 21:21:58.046720] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:08.868 [2024-12-16 21:21:58.078650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.868 [2024-12-16 21:21:58.079545] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.868 [2024-12-16 21:21:58.086652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.868 [2024-12-16 21:21:58.086911] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:08.868 [2024-12-16 21:21:58.086924] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 [2024-12-16 21:21:58.102720] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:08.868 [2024-12-16 21:21:58.140676] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.868 [2024-12-16 21:21:58.141470] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.868 [2024-12-16 21:21:58.150657] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.868 [2024-12-16 21:21:58.150912] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:08.868 [2024-12-16 21:21:58.150926] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 [2024-12-16 21:21:58.158718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:08.868 [2024-12-16 21:21:58.193691] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.868 [2024-12-16 21:21:58.194417] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.868 [2024-12-16 21:21:58.201651] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.868 [2024-12-16 21:21:58.201885] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:08.868 [2024-12-16 21:21:58.201896] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.868 [2024-12-16 21:21:58.217712] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:08.868 [2024-12-16 21:21:58.249123] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.868 [2024-12-16 21:21:58.250077] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.868 [2024-12-16 21:21:58.256658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.868 [2024-12-16 21:21:58.256898] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:08.868 [2024-12-16 21:21:58.256909] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:08.869 [2024-12-16 21:21:58.456701] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:08.869 [2024-12-16 21:21:58.458023] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:08.869 [2024-12-16 21:21:58.458054] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.869 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:09.127 21:21:58 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:09.127 21:21:58 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:09.386 ************************************ 00:16:09.386 END TEST test_create_multi_ublk 00:16:09.386 ************************************ 00:16:09.386 21:21:58 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:09.386 00:16:09.386 real 0m2.151s 00:16:09.386 user 0m0.826s 00:16:09.386 sys 0m0.132s 00:16:09.386 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.386 21:21:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.386 21:21:58 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:09.386 21:21:58 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:09.386 21:21:58 ublk -- ublk/ublk.sh@130 -- # killprocess 86525 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@954 -- # '[' -z 86525 ']' 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@958 -- # kill -0 86525 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@959 -- # uname 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86525 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.386 killing process with pid 86525 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86525' 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@973 -- # kill 86525 00:16:09.386 21:21:58 ublk -- common/autotest_common.sh@978 -- # wait 86525 00:16:09.645 [2024-12-16 21:21:59.137772] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:09.645 [2024-12-16 21:21:59.137852] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:09.906 00:16:09.906 real 0m19.118s 00:16:09.906 user 0m28.866s 00:16:09.906 sys 0m8.131s 00:16:09.907 21:21:59 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.907 21:21:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.907 ************************************ 00:16:09.907 END TEST ublk 00:16:09.907 ************************************ 00:16:09.907 21:21:59 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:09.907 
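The whole test_create_multi_ublk flow above is driven through scripts/rpc.py: one malloc bdev per device, ublk_start_disk with -q 4 -d 512, a jq pass over ublk_get_disks that checks ublk_device, id, queue_depth, num_queues and bdev_name for each of the four disks, then ublk_stop_disk per device, ublk_destroy_target, and bdev_malloc_delete in teardown. A minimal by-hand sketch of the same lifecycle (the loop and variable names are illustrative, not from the test; it assumes the ublk_drv kernel module is loaded and spdk_tgt is running with -L ublk):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC ublk_create_target
  for i in 0 1 2 3; do
    $RPC bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB bdev, 4 KiB blocks
    $RPC ublk_start_disk Malloc$i $i -q 4 -d 512   # exposes /dev/ublkb$i
  done
  $RPC ublk_get_disks | jq -r '.[].ublk_device'    # expect /dev/ublkb0 .. /dev/ublkb3
  for i in 0 1 2 3; do
    $RPC ublk_stop_disk $i                         # UBLK_CMD_STOP_DEV + UBLK_CMD_DEL_DEV
  done
  $RPC ublk_destroy_target                         # the '_ublk_fini: finish shutdown' lines
  for i in 0 1 2 3; do
    $RPC bdev_malloc_delete Malloc$i
  done

Each ublk_stop_disk corresponds to one UBLK_CMD_STOP_DEV / UBLK_CMD_DEL_DEV pair visible in the trace, mirroring the ADD_DEV / SET_PARAMS / START_DEV sequence that created the disk.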
21:21:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:09.907 21:21:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.907 21:21:59 -- common/autotest_common.sh@10 -- # set +x 00:16:09.907 ************************************ 00:16:09.907 START TEST ublk_recovery 00:16:09.907 ************************************ 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:09.907 * Looking for test storage... 00:16:09.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.907 21:21:59 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:09.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.907 --rc genhtml_branch_coverage=1 00:16:09.907 --rc genhtml_function_coverage=1 00:16:09.907 --rc genhtml_legend=1 00:16:09.907 --rc geninfo_all_blocks=1 00:16:09.907 --rc geninfo_unexecuted_blocks=1 00:16:09.907 00:16:09.907 ' 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:09.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.907 --rc genhtml_branch_coverage=1 00:16:09.907 --rc genhtml_function_coverage=1 00:16:09.907 --rc genhtml_legend=1 00:16:09.907 --rc geninfo_all_blocks=1 00:16:09.907 --rc geninfo_unexecuted_blocks=1 00:16:09.907 00:16:09.907 ' 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:09.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.907 --rc genhtml_branch_coverage=1 00:16:09.907 --rc genhtml_function_coverage=1 00:16:09.907 --rc genhtml_legend=1 00:16:09.907 --rc geninfo_all_blocks=1 00:16:09.907 --rc geninfo_unexecuted_blocks=1 00:16:09.907 00:16:09.907 ' 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:09.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.907 --rc genhtml_branch_coverage=1 00:16:09.907 --rc genhtml_function_coverage=1 00:16:09.907 --rc genhtml_legend=1 00:16:09.907 --rc geninfo_all_blocks=1 00:16:09.907 --rc geninfo_unexecuted_blocks=1 00:16:09.907 00:16:09.907 ' 00:16:09.907 21:21:59 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:09.907 21:21:59 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:09.907 21:21:59 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:09.907 21:21:59 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:09.907 21:21:59 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:09.907 21:21:59 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:09.907 21:21:59 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:09.907 21:21:59 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:09.907 21:21:59 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:09.907 21:21:59 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:09.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.907 21:21:59 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86891 00:16:09.907 21:21:59 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.907 21:21:59 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86891 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86891 ']' 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.907 21:21:59 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:09.907 21:21:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.166 [2024-12-16 21:21:59.657268] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:10.166 [2024-12-16 21:21:59.657385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86891 ] 00:16:10.166 [2024-12-16 21:21:59.796826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:10.166 [2024-12-16 21:21:59.821764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.166 [2024-12-16 21:21:59.821776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:11.100 21:22:00 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:11.100 [2024-12-16 21:22:00.490645] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:11.100 [2024-12-16 21:22:00.491894] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.100 21:22:00 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:11.100 malloc0 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.100 21:22:00 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:11.100 [2024-12-16 21:22:00.530740] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:11.100 [2024-12-16 21:22:00.530828] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:11.100 [2024-12-16 21:22:00.530835] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:11.100 [2024-12-16 21:22:00.530843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:11.100 [2024-12-16 21:22:00.539748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:11.100 [2024-12-16 21:22:00.539771] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:11.100 [2024-12-16 21:22:00.546657] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:11.100 [2024-12-16 21:22:00.546781] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:11.100 [2024-12-16 21:22:00.561652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:11.100 1 00:16:11.100 21:22:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.100 21:22:00 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:12.034 21:22:01 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=86924 00:16:12.034 21:22:01 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:12.034 21:22:01 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:12.034 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.034 fio-3.35 00:16:12.034 Starting 1 process 00:16:17.302 21:22:06 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86891 00:16:17.302 21:22:06 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:22.587 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86891 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:22.587 21:22:11 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87029 00:16:22.587 21:22:11 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:22.587 21:22:11 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87029 00:16:22.587 21:22:11 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 87029 ']' 00:16:22.587 21:22:11 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:22.587 21:22:11 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.587 21:22:11 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.587 21:22:11 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.587 21:22:11 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.587 21:22:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.587 [2024-12-16 21:22:11.691507] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
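This is the heart of the recovery scenario: a single ublk disk backed by malloc0 (-q 2 -d 128, exposed as /dev/ublkb1) is kept busy by a 60-second randrw fio job, the first target (pid 86891) is killed with SIGKILL mid-I/O, and a second spdk_tgt (pid 87029) is started against the same kernel-side device. Roughly, ublk_recovery.sh does the following (a sketch assuming the same paths; the fio flags are copied from the trace, the sleeps are illustrative):

  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  modprobe ublk_drv
  "$BIN" -m 0x3 -L ublk & spdk_pid=$!
  $RPC ublk_create_target
  $RPC bdev_malloc_create -b malloc0 64 4096
  $RPC ublk_start_disk malloc0 1 -q 2 -d 128       # exposes /dev/ublkb1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 & fio_pid=$!
  sleep 5
  kill -9 $spdk_pid                                # hard-kill the target while fio runs
  sleep 5
  "$BIN" -m 0x3 -L ublk & spdk_pid=$!              # new target takes over
  $RPC ublk_create_target
  $RPC bdev_malloc_create -b malloc0 64 4096
  $RPC ublk_recover_disk malloc0 1                 # re-attach bdev to kernel ublk device 1
  wait $fio_pid                                    # fio must finish without I/O errors

ublk_recover_disk is what produces the trace seen below: the new target polls UBLK_CMD_GET_DEV_INFO until the old daemon is reaped (the repeated 'Ublk 1 device state 1' checks), then issues UBLK_CMD_START_USER_RECOVERY, replays the queue setup (num queues 2, queue depth 128, flags 0xda), and finishes with UBLK_CMD_END_USER_RECOVERY before I/O resumes.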
00:16:22.587 [2024-12-16 21:22:11.692840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87029 ] 00:16:22.587 [2024-12-16 21:22:11.843410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:22.587 [2024-12-16 21:22:11.868649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.587 [2024-12-16 21:22:11.868659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.848 21:22:12 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.849 21:22:12 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:22.849 21:22:12 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:22.849 21:22:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.849 21:22:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.849 [2024-12-16 21:22:12.545647] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:22.849 [2024-12-16 21:22:12.546922] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:22.849 21:22:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.849 21:22:12 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:22.849 21:22:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.849 21:22:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.110 malloc0 00:16:23.110 21:22:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.110 21:22:12 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:23.110 21:22:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.110 21:22:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.110 [2024-12-16 21:22:12.585764] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:23.110 [2024-12-16 21:22:12.585801] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:23.110 [2024-12-16 21:22:12.585808] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:23.110 [2024-12-16 21:22:12.593684] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:23.110 [2024-12-16 21:22:12.593701] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:23.110 1 00:16:23.110 21:22:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.110 21:22:12 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 86924 00:16:24.051 [2024-12-16 21:22:13.593737] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:24.051 [2024-12-16 21:22:13.600658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:24.051 [2024-12-16 21:22:13.600678] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:24.985 [2024-12-16 21:22:14.600698] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:24.985 [2024-12-16 21:22:14.604671] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:24.985 [2024-12-16 21:22:14.604682] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:16:25.981 [2024-12-16 21:22:15.604705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:25.981 [2024-12-16 21:22:15.608638] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:25.981 [2024-12-16 21:22:15.608652] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:25.981 [2024-12-16 21:22:15.608658] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:25.981 [2024-12-16 21:22:15.608721] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:47.902 [2024-12-16 21:22:36.872648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:47.902 [2024-12-16 21:22:36.878099] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:47.902 [2024-12-16 21:22:36.884846] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:47.903 [2024-12-16 21:22:36.884863] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:14.447 00:17:14.447 fio_test: (groupid=0, jobs=1): err= 0: pid=86927: Mon Dec 16 21:23:01 2024 00:17:14.447 read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(3227MiB/60002msec) 00:17:14.447 slat (nsec): min=1284, max=307811, avg=5639.77, stdev=1778.58 00:17:14.447 clat (usec): min=893, max=30319k, avg=4526.68, stdev=262602.87 00:17:14.447 lat (usec): min=898, max=30319k, avg=4532.32, stdev=262602.87 00:17:14.447 clat percentiles (usec): 00:17:14.447 | 1.00th=[ 1844], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2073], 00:17:14.447 | 30.00th=[ 2089], 40.00th=[ 2114], 50.00th=[ 2114], 60.00th=[ 2147], 00:17:14.447 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 3195], 00:17:14.447 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 8455], 99.95th=[12256], 00:17:14.447 | 99.99th=[13173] 00:17:14.447 bw ( KiB/s): min=44424, max=116456, per=100.00%, avg=110285.90, stdev=12844.65, samples=59 00:17:14.448 iops : min=11106, max=29114, avg=27571.47, stdev=3211.16, samples=59 00:17:14.448 write: IOPS=13.8k, BW=53.7MiB/s (56.3MB/s)(3223MiB/60002msec); 0 zone resets 00:17:14.448 slat (nsec): min=1327, max=261225, avg=5915.99, stdev=1751.43 00:17:14.448 clat (usec): min=870, max=30319k, avg=4762.60, stdev=271119.77 00:17:14.448 lat (usec): min=878, max=30319k, avg=4768.52, stdev=271119.77 00:17:14.448 clat percentiles (usec): 00:17:14.448 | 1.00th=[ 1876], 5.00th=[ 2089], 10.00th=[ 2147], 20.00th=[ 2180], 00:17:14.448 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[ 2212], 60.00th=[ 2245], 00:17:14.448 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 3130], 00:17:14.448 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 8356], 99.95th=[12256], 00:17:14.448 | 99.99th=[13173] 00:17:14.448 bw ( KiB/s): min=45064, max=114608, per=100.00%, avg=110156.80, stdev=12678.41, samples=59 00:17:14.448 iops : min=11266, max=28652, avg=27539.19, stdev=3169.65, samples=59 00:17:14.448 lat (usec) : 1000=0.01% 00:17:14.448 lat (msec) : 2=3.49%, 4=93.63%, 10=2.82%, 20=0.05%, >=2000=0.01% 00:17:14.448 cpu : usr=3.12%, sys=16.35%, ctx=54314, majf=0, minf=13 00:17:14.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:14.448 issued 
rwts: total=826220,825128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:14.448 00:17:14.448 Run status group 0 (all jobs): 00:17:14.448 READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=3227MiB (3384MB), run=60002-60002msec 00:17:14.448 WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=3223MiB (3380MB), run=60002-60002msec 00:17:14.448 00:17:14.448 Disk stats (read/write): 00:17:14.448 ublkb1: ios=823030/822051, merge=0/0, ticks=3681762/3799453, in_queue=7481215, util=99.91% 00:17:14.448 21:23:01 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.448 [2024-12-16 21:23:01.825082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:14.448 [2024-12-16 21:23:01.861774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:14.448 [2024-12-16 21:23:01.861912] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:14.448 [2024-12-16 21:23:01.868672] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:14.448 [2024-12-16 21:23:01.868772] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:14.448 [2024-12-16 21:23:01.868781] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.448 21:23:01 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.448 [2024-12-16 21:23:01.882734] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:14.448 [2024-12-16 21:23:01.883982] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:14.448 [2024-12-16 21:23:01.884018] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.448 21:23:01 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:14.448 21:23:01 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:14.448 21:23:01 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87029 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 87029 ']' 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 87029 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87029 00:17:14.448 killing process with pid 87029 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87029' 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@973 -- # kill 87029 00:17:14.448 21:23:01 ublk_recovery -- common/autotest_common.sh@978 -- # wait 87029 
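The fio summary above is internally consistent and quantifies the takeover cost: 826,220 reads in 60.002 s is ~13.8k IOPS, which at the job's 4 KiB block size is 13,770 x 4096 B ≈ 56.4 MB/s, exactly the reported 53.8 MiB/s (56.4 MB/s); writes track reads to within 0.2%, as expected for randrw. The clat maximum of 30319k usec (~30.3 s) is the I/O that sat in flight across the outage: the SIGKILL landed at 21:22:06 and UBLK_CMD_START_USER_RECOVERY only completed at 21:22:36, a ~30 s window that lines up with that maximum while the 99th percentile stayed near 5 ms. The disk finishing at util=99.91% with zero errors is the actual pass criterion for the recovery test.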
00:17:14.448 [2024-12-16 21:23:02.152487] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:14.448 [2024-12-16 21:23:02.152537] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:14.448 ************************************ 00:17:14.448 END TEST ublk_recovery 00:17:14.448 ************************************ 00:17:14.448 00:17:14.448 real 1m3.073s 00:17:14.448 user 1m42.610s 00:17:14.448 sys 0m24.700s 00:17:14.448 21:23:02 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.448 21:23:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.448 21:23:02 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:14.448 21:23:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:14.448 21:23:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:14.448 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:17:14.448 21:23:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:14.448 21:23:02 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:14.448 21:23:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:14.448 21:23:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.448 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:17:14.448 ************************************ 00:17:14.448 START TEST ftl 00:17:14.448 ************************************ 00:17:14.448 21:23:02 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:14.448 * Looking for test storage... 
00:17:14.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.448 21:23:02 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:14.448 21:23:02 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:14.448 21:23:02 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:14.448 21:23:02 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:14.448 21:23:02 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.448 21:23:02 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.448 21:23:02 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.448 21:23:02 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.448 21:23:02 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.448 21:23:02 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.448 21:23:02 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.448 21:23:02 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.448 21:23:02 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.448 21:23:02 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.448 21:23:02 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.448 21:23:02 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:14.448 21:23:02 ftl -- scripts/common.sh@345 -- # : 1 00:17:14.448 21:23:02 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.448 21:23:02 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:14.448 21:23:02 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:14.448 21:23:02 ftl -- scripts/common.sh@353 -- # local d=1 00:17:14.448 21:23:02 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.448 21:23:02 ftl -- scripts/common.sh@355 -- # echo 1 00:17:14.448 21:23:02 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.448 21:23:02 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:14.448 21:23:02 ftl -- scripts/common.sh@353 -- # local d=2 00:17:14.448 21:23:02 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.448 21:23:02 ftl -- scripts/common.sh@355 -- # echo 2 00:17:14.448 21:23:02 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.448 21:23:02 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.448 21:23:02 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.448 21:23:02 ftl -- scripts/common.sh@368 -- # return 0 00:17:14.448 21:23:02 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.448 21:23:02 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:14.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.448 --rc genhtml_branch_coverage=1 00:17:14.448 --rc genhtml_function_coverage=1 00:17:14.448 --rc genhtml_legend=1 00:17:14.448 --rc geninfo_all_blocks=1 00:17:14.448 --rc geninfo_unexecuted_blocks=1 00:17:14.448 00:17:14.448 ' 00:17:14.448 21:23:02 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:14.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.448 --rc genhtml_branch_coverage=1 00:17:14.448 --rc genhtml_function_coverage=1 00:17:14.448 --rc genhtml_legend=1 00:17:14.448 --rc geninfo_all_blocks=1 00:17:14.448 --rc geninfo_unexecuted_blocks=1 00:17:14.448 00:17:14.448 ' 00:17:14.448 21:23:02 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:14.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.448 --rc genhtml_branch_coverage=1 00:17:14.448 --rc genhtml_function_coverage=1 00:17:14.448 --rc 
genhtml_legend=1 00:17:14.448 --rc geninfo_all_blocks=1 00:17:14.448 --rc geninfo_unexecuted_blocks=1 00:17:14.448 00:17:14.449 ' 00:17:14.449 21:23:02 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:14.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.449 --rc genhtml_branch_coverage=1 00:17:14.449 --rc genhtml_function_coverage=1 00:17:14.449 --rc genhtml_legend=1 00:17:14.449 --rc geninfo_all_blocks=1 00:17:14.449 --rc geninfo_unexecuted_blocks=1 00:17:14.449 00:17:14.449 ' 00:17:14.449 21:23:02 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:14.449 21:23:02 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:14.449 21:23:02 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.449 21:23:02 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.449 21:23:02 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:14.449 21:23:02 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:14.449 21:23:02 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.449 21:23:02 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:14.449 21:23:02 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:14.449 21:23:02 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.449 21:23:02 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.449 21:23:02 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:14.449 21:23:02 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:14.449 21:23:02 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:14.449 21:23:02 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:14.449 21:23:02 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:14.449 21:23:02 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:14.449 21:23:02 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.449 21:23:02 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.449 21:23:02 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:14.449 21:23:02 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:14.449 21:23:02 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:14.449 21:23:02 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:14.449 21:23:02 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:14.449 21:23:02 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:14.449 21:23:02 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:14.449 21:23:02 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:14.449 21:23:02 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:14.449 21:23:02 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:14.449 21:23:02 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.449 21:23:02 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:14.449 21:23:02 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:14.449 21:23:02 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:14.449 21:23:02 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:14.449 21:23:02 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:14.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:14.449 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:14.449 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:14.449 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:14.449 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:14.449 21:23:03 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87832 00:17:14.449 21:23:03 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:14.449 21:23:03 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87832 00:17:14.449 21:23:03 ftl -- common/autotest_common.sh@835 -- # '[' -z 87832 ']' 00:17:14.449 21:23:03 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.449 21:23:03 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.449 21:23:03 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.449 21:23:03 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.449 21:23:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:14.449 [2024-12-16 21:23:03.316336] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:14.449 [2024-12-16 21:23:03.316665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87832 ] 00:17:14.449 [2024-12-16 21:23:03.459561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.449 [2024-12-16 21:23:03.484177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.449 21:23:04 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.449 21:23:04 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:14.449 21:23:04 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:14.710 21:23:04 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:15.282 21:23:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:15.282 21:23:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:15.541 21:23:05 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:15.541 21:23:05 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:15.541 21:23:05 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:15.799 21:23:05 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:15.799 21:23:05 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:15.799 21:23:05 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:15.799 21:23:05 ftl -- ftl/ftl.sh@50 -- # break 00:17:15.799 21:23:05 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:15.799 21:23:05 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:15.799 21:23:05 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:15.799 21:23:05 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:16.056 21:23:05 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:16.056 21:23:05 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:16.056 21:23:05 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:16.056 21:23:05 ftl -- ftl/ftl.sh@63 -- # break 00:17:16.056 21:23:05 ftl -- ftl/ftl.sh@66 -- # killprocess 87832 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 87832 ']' 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@958 -- # kill -0 87832 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@959 -- # uname 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87832 00:17:16.057 killing process with pid 87832 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87832' 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@973 -- # kill 87832 00:17:16.057 21:23:05 ftl -- common/autotest_common.sh@978 -- # wait 87832 00:17:16.316 21:23:05 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:16.316 21:23:05 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:16.316 21:23:05 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:16.316 21:23:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.316 21:23:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:16.316 ************************************ 00:17:16.316 START TEST ftl_fio_basic 00:17:16.316 ************************************ 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:16.316 * Looking for test storage... 
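The device selection that ftl.sh just traced is worth spelling out: the write-cache ('nv_cache') disk must be a non-zoned NVMe bdev exposing 64-byte metadata with at least 1,310,720 blocks (the md_size==64 filter matched 0000:00:10.0), and the base disk is any other non-zoned bdev of the same minimum size, excluding the already-chosen cache address (0000:00:11.0 here). The two jq filters below are verbatim from the trace; only the shell plumbing around them is a sketch:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # cache disk: 64-byte metadata, non-zoned, >= 1310720 blocks -> 0000:00:10.0
  nv_cache=$($RPC bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
  # base disk: any other qualifying device -> 0000:00:11.0
  base=$($RPC bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')

With 4 KiB blocks, 1,310,720 blocks is a 5 GiB floor; the hard-coded 0000:00:10.0 in the second filter is simply the cache address the first filter selected, so the base pool never re-picks the cache disk.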
00:17:16.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.316 21:23:05 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:16.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.316 --rc genhtml_branch_coverage=1 00:17:16.316 --rc genhtml_function_coverage=1 00:17:16.316 --rc genhtml_legend=1 00:17:16.316 --rc geninfo_all_blocks=1 00:17:16.316 --rc geninfo_unexecuted_blocks=1 00:17:16.316 00:17:16.316 ' 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:16.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.316 --rc 
genhtml_branch_coverage=1 00:17:16.316 --rc genhtml_function_coverage=1 00:17:16.316 --rc genhtml_legend=1 00:17:16.316 --rc geninfo_all_blocks=1 00:17:16.316 --rc geninfo_unexecuted_blocks=1 00:17:16.316 00:17:16.316 ' 00:17:16.316 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:16.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.316 --rc genhtml_branch_coverage=1 00:17:16.316 --rc genhtml_function_coverage=1 00:17:16.316 --rc genhtml_legend=1 00:17:16.316 --rc geninfo_all_blocks=1 00:17:16.316 --rc geninfo_unexecuted_blocks=1 00:17:16.316 00:17:16.316 ' 00:17:16.317 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:16.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.317 --rc genhtml_branch_coverage=1 00:17:16.317 --rc genhtml_function_coverage=1 00:17:16.317 --rc genhtml_legend=1 00:17:16.317 --rc geninfo_all_blocks=1 00:17:16.317 --rc geninfo_unexecuted_blocks=1 00:17:16.317 00:17:16.317 ' 00:17:16.317 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:16.317 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:16.317 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:16.575 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:16.575 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:16.575 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:16.575 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:16.576 
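The lt/cmp_versions walk traced above (scripts/common.sh@333-368) amounts to a field-by-field numeric compare after splitting each version on ".-:"; a condensed sketch of that logic, not the verbatim helper, assuming lcov is installed:

lt() {   # usage: lt VER1 VER2 -> succeeds when VER1 sorts before VER2
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
# lcov 1.15 sorts before 2, so the legacy branch/function coverage options apply:
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov options"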
21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=87953 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 87953 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 87953 ']' 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
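fio.sh@44-46 above launches spdk_tgt on a three-core mask and blocks until the RPC socket answers. A minimal stand-in for that launch-and-wait step (the real waitforlisten in autotest_common.sh does more bookkeeping; the polling loop here is a simplified assumption, though rpc_get_methods is a standard SPDK RPC):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt" -m 7 &          # -m 7 = cores 0-2, matching the three reactors below
svcpid=$!
for _ in {1..100}; do       # poll until the UNIX socket accepts RPCs
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done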
00:17:16.576 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.576 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:16.576 [2024-12-16 21:23:06.108988] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:16.576 [2024-12-16 21:23:06.109257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87953 ] 00:17:16.576 [2024-12-16 21:23:06.241284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:16.576 [2024-12-16 21:23:06.259028] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.576 [2024-12-16 21:23:06.259259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.576 [2024-12-16 21:23:06.259339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.509 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.509 21:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:17.509 21:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:17.509 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:17.509 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:17.509 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:17.509 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:17.509 21:23:06 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:17.509 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:17.509 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:17.509 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:17.509 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:17.509 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:17.509 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:17.509 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:17.509 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:17.766 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:17.766 { 00:17:17.766 "name": "nvme0n1", 00:17:17.766 "aliases": [ 00:17:17.766 "1fc5e55e-e21f-469d-b1c7-299bade21b28" 00:17:17.766 ], 00:17:17.766 "product_name": "NVMe disk", 00:17:17.766 "block_size": 4096, 00:17:17.766 "num_blocks": 1310720, 00:17:17.766 "uuid": "1fc5e55e-e21f-469d-b1c7-299bade21b28", 00:17:17.766 "numa_id": -1, 00:17:17.766 "assigned_rate_limits": { 00:17:17.766 "rw_ios_per_sec": 0, 00:17:17.766 "rw_mbytes_per_sec": 0, 00:17:17.766 "r_mbytes_per_sec": 0, 00:17:17.767 "w_mbytes_per_sec": 0 00:17:17.767 }, 00:17:17.767 "claimed": false, 00:17:17.767 "zoned": false, 00:17:17.767 "supported_io_types": { 00:17:17.767 "read": true, 00:17:17.767 "write": true, 00:17:17.767 "unmap": true, 00:17:17.767 "flush": true, 00:17:17.767 "reset": true, 00:17:17.767 "nvme_admin": true, 00:17:17.767 "nvme_io": true, 00:17:17.767 "nvme_io_md": 
false, 00:17:17.767 "write_zeroes": true, 00:17:17.767 "zcopy": false, 00:17:17.767 "get_zone_info": false, 00:17:17.767 "zone_management": false, 00:17:17.767 "zone_append": false, 00:17:17.767 "compare": true, 00:17:17.767 "compare_and_write": false, 00:17:17.767 "abort": true, 00:17:17.767 "seek_hole": false, 00:17:17.767 "seek_data": false, 00:17:17.767 "copy": true, 00:17:17.767 "nvme_iov_md": false 00:17:17.767 }, 00:17:17.767 "driver_specific": { 00:17:17.767 "nvme": [ 00:17:17.767 { 00:17:17.767 "pci_address": "0000:00:11.0", 00:17:17.767 "trid": { 00:17:17.767 "trtype": "PCIe", 00:17:17.767 "traddr": "0000:00:11.0" 00:17:17.767 }, 00:17:17.767 "ctrlr_data": { 00:17:17.767 "cntlid": 0, 00:17:17.767 "vendor_id": "0x1b36", 00:17:17.767 "model_number": "QEMU NVMe Ctrl", 00:17:17.767 "serial_number": "12341", 00:17:17.767 "firmware_revision": "8.0.0", 00:17:17.767 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:17.767 "oacs": { 00:17:17.767 "security": 0, 00:17:17.767 "format": 1, 00:17:17.767 "firmware": 0, 00:17:17.767 "ns_manage": 1 00:17:17.767 }, 00:17:17.767 "multi_ctrlr": false, 00:17:17.767 "ana_reporting": false 00:17:17.767 }, 00:17:17.767 "vs": { 00:17:17.767 "nvme_version": "1.4" 00:17:17.767 }, 00:17:17.767 "ns_data": { 00:17:17.767 "id": 1, 00:17:17.767 "can_share": false 00:17:17.767 } 00:17:17.767 } 00:17:17.767 ], 00:17:17.767 "mp_policy": "active_passive" 00:17:17.767 } 00:17:17.767 } 00:17:17.767 ]' 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:17.767 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:18.025 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:18.025 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:18.283 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=8d990f6d-8029-420c-9a85-ea04aca19e5d 00:17:18.283 21:23:07 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8d990f6d-8029-420c-9a85-ea04aca19e5d 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:18.541 21:23:08 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:18.541 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:18.799 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:18.799 { 00:17:18.799 "name": "20a29852-9d1b-4b5c-bd63-abd50253aa12", 00:17:18.799 "aliases": [ 00:17:18.799 "lvs/nvme0n1p0" 00:17:18.799 ], 00:17:18.799 "product_name": "Logical Volume", 00:17:18.799 "block_size": 4096, 00:17:18.799 "num_blocks": 26476544, 00:17:18.799 "uuid": "20a29852-9d1b-4b5c-bd63-abd50253aa12", 00:17:18.799 "assigned_rate_limits": { 00:17:18.799 "rw_ios_per_sec": 0, 00:17:18.799 "rw_mbytes_per_sec": 0, 00:17:18.799 "r_mbytes_per_sec": 0, 00:17:18.799 "w_mbytes_per_sec": 0 00:17:18.799 }, 00:17:18.799 "claimed": false, 00:17:18.799 "zoned": false, 00:17:18.799 "supported_io_types": { 00:17:18.799 "read": true, 00:17:18.799 "write": true, 00:17:18.799 "unmap": true, 00:17:18.799 "flush": false, 00:17:18.799 "reset": true, 00:17:18.799 "nvme_admin": false, 00:17:18.799 "nvme_io": false, 00:17:18.799 "nvme_io_md": false, 00:17:18.799 "write_zeroes": true, 00:17:18.799 "zcopy": false, 00:17:18.799 "get_zone_info": false, 00:17:18.799 "zone_management": false, 00:17:18.799 "zone_append": false, 00:17:18.799 "compare": false, 00:17:18.799 "compare_and_write": false, 00:17:18.799 "abort": false, 00:17:18.799 "seek_hole": true, 00:17:18.799 "seek_data": true, 00:17:18.799 "copy": false, 00:17:18.799 "nvme_iov_md": false 00:17:18.799 }, 00:17:18.799 "driver_specific": { 00:17:18.799 "lvol": { 00:17:18.799 "lvol_store_uuid": "8d990f6d-8029-420c-9a85-ea04aca19e5d", 00:17:18.799 "base_bdev": "nvme0n1", 00:17:18.799 "thin_provision": true, 00:17:18.799 "num_allocated_clusters": 0, 00:17:18.799 "snapshot": false, 00:17:18.799 "clone": false, 00:17:18.799 "esnap_clone": false 00:17:18.799 } 00:17:18.799 } 00:17:18.799 } 00:17:18.799 ]' 00:17:18.799 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:18.800 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:18.800 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:18.800 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:18.800 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:18.800 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:18.800 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:18.800 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:18.800 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:19.057 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:19.057 21:23:08 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:17:19.057 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:19.057 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:19.057 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:19.057 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:19.057 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:19.058 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:19.315 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:19.315 { 00:17:19.315 "name": "20a29852-9d1b-4b5c-bd63-abd50253aa12", 00:17:19.315 "aliases": [ 00:17:19.315 "lvs/nvme0n1p0" 00:17:19.315 ], 00:17:19.315 "product_name": "Logical Volume", 00:17:19.316 "block_size": 4096, 00:17:19.316 "num_blocks": 26476544, 00:17:19.316 "uuid": "20a29852-9d1b-4b5c-bd63-abd50253aa12", 00:17:19.316 "assigned_rate_limits": { 00:17:19.316 "rw_ios_per_sec": 0, 00:17:19.316 "rw_mbytes_per_sec": 0, 00:17:19.316 "r_mbytes_per_sec": 0, 00:17:19.316 "w_mbytes_per_sec": 0 00:17:19.316 }, 00:17:19.316 "claimed": false, 00:17:19.316 "zoned": false, 00:17:19.316 "supported_io_types": { 00:17:19.316 "read": true, 00:17:19.316 "write": true, 00:17:19.316 "unmap": true, 00:17:19.316 "flush": false, 00:17:19.316 "reset": true, 00:17:19.316 "nvme_admin": false, 00:17:19.316 "nvme_io": false, 00:17:19.316 "nvme_io_md": false, 00:17:19.316 "write_zeroes": true, 00:17:19.316 "zcopy": false, 00:17:19.316 "get_zone_info": false, 00:17:19.316 "zone_management": false, 00:17:19.316 "zone_append": false, 00:17:19.316 "compare": false, 00:17:19.316 "compare_and_write": false, 00:17:19.316 "abort": false, 00:17:19.316 "seek_hole": true, 00:17:19.316 "seek_data": true, 00:17:19.316 "copy": false, 00:17:19.316 "nvme_iov_md": false 00:17:19.316 }, 00:17:19.316 "driver_specific": { 00:17:19.316 "lvol": { 00:17:19.316 "lvol_store_uuid": "8d990f6d-8029-420c-9a85-ea04aca19e5d", 00:17:19.316 "base_bdev": "nvme0n1", 00:17:19.316 "thin_provision": true, 00:17:19.316 "num_allocated_clusters": 0, 00:17:19.316 "snapshot": false, 00:17:19.316 "clone": false, 00:17:19.316 "esnap_clone": false 00:17:19.316 } 00:17:19.316 } 00:17:19.316 } 00:17:19.316 ]' 00:17:19.316 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:19.316 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:19.316 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:19.316 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:19.316 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:19.316 21:23:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:19.316 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:19.316 21:23:08 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:19.573 21:23:09 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:19.574 
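get_bdev_size, invoked above for nvme0n1 and again for the thin lvol, is just block_size times num_blocks scaled to MiB; a sketch of the helper (autotest_common.sh@1382-1392) with the values this run produced. The bare '[' -eq 1 ']' traced just above is an unquoted variable that expanded to nothing, which is why the next log line reports "unary operator expected"; a default expansion avoids that (the variable name below is hypothetical):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
get_bdev_size() {           # prints the named bdev's size in MiB
    local bdev_info bs nb
    bdev_info=$("$rpc" bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
    echo $(( bs * nb / 1024 / 1024 ))
}
get_bdev_size nvme0n1                                # 4096 B x 1310720 blocks  -> 5120
get_bdev_size 20a29852-9d1b-4b5c-bd63-abd50253aa12   # 4096 B x 26476544 blocks -> 103424
# fio.sh@52's '[' -eq 1 ']' failed because its left operand expanded to empty;
# a guarded test such as [ "${l2p_oversubscription:-0}" -eq 1 ] sidesteps it.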
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20a29852-9d1b-4b5c-bd63-abd50253aa12 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:19.574 { 00:17:19.574 "name": "20a29852-9d1b-4b5c-bd63-abd50253aa12", 00:17:19.574 "aliases": [ 00:17:19.574 "lvs/nvme0n1p0" 00:17:19.574 ], 00:17:19.574 "product_name": "Logical Volume", 00:17:19.574 "block_size": 4096, 00:17:19.574 "num_blocks": 26476544, 00:17:19.574 "uuid": "20a29852-9d1b-4b5c-bd63-abd50253aa12", 00:17:19.574 "assigned_rate_limits": { 00:17:19.574 "rw_ios_per_sec": 0, 00:17:19.574 "rw_mbytes_per_sec": 0, 00:17:19.574 "r_mbytes_per_sec": 0, 00:17:19.574 "w_mbytes_per_sec": 0 00:17:19.574 }, 00:17:19.574 "claimed": false, 00:17:19.574 "zoned": false, 00:17:19.574 "supported_io_types": { 00:17:19.574 "read": true, 00:17:19.574 "write": true, 00:17:19.574 "unmap": true, 00:17:19.574 "flush": false, 00:17:19.574 "reset": true, 00:17:19.574 "nvme_admin": false, 00:17:19.574 "nvme_io": false, 00:17:19.574 "nvme_io_md": false, 00:17:19.574 "write_zeroes": true, 00:17:19.574 "zcopy": false, 00:17:19.574 "get_zone_info": false, 00:17:19.574 "zone_management": false, 00:17:19.574 "zone_append": false, 00:17:19.574 "compare": false, 00:17:19.574 "compare_and_write": false, 00:17:19.574 "abort": false, 00:17:19.574 "seek_hole": true, 00:17:19.574 "seek_data": true, 00:17:19.574 "copy": false, 00:17:19.574 "nvme_iov_md": false 00:17:19.574 }, 00:17:19.574 "driver_specific": { 00:17:19.574 "lvol": { 00:17:19.574 "lvol_store_uuid": "8d990f6d-8029-420c-9a85-ea04aca19e5d", 00:17:19.574 "base_bdev": "nvme0n1", 00:17:19.574 "thin_provision": true, 00:17:19.574 "num_allocated_clusters": 0, 00:17:19.574 "snapshot": false, 00:17:19.574 "clone": false, 00:17:19.574 "esnap_clone": false 00:17:19.574 } 00:17:19.574 } 00:17:19.574 } 00:17:19.574 ]' 00:17:19.574 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:19.833 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:19.833 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:19.833 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:19.833 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:19.833 21:23:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:19.833 21:23:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:19.833 21:23:09 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:19.833 21:23:09 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 20a29852-9d1b-4b5c-bd63-abd50253aa12 -c nvc0n1p0 --l2p_dram_limit 60 00:17:19.833 [2024-12-16 21:23:09.494768] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.494888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:19.833 [2024-12-16 21:23:09.494904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.833 [2024-12-16 21:23:09.494913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.494965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.494974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.833 [2024-12-16 21:23:09.494980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:19.833 [2024-12-16 21:23:09.494990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.495016] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:19.833 [2024-12-16 21:23:09.495237] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:19.833 [2024-12-16 21:23:09.495248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.495256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.833 [2024-12-16 21:23:09.495262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:17:19.833 [2024-12-16 21:23:09.495269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.495323] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 813eb349-99cf-4639-99f1-caf42d52ad90 00:17:19.833 [2024-12-16 21:23:09.496299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.496323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:19.833 [2024-12-16 21:23:09.496332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:19.833 [2024-12-16 21:23:09.496338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.501394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.501417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.833 [2024-12-16 21:23:09.501427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.998 ms 00:17:19.833 [2024-12-16 21:23:09.501434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.501511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.501518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.833 [2024-12-16 21:23:09.501528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:19.833 [2024-12-16 21:23:09.501534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.501579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.501589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:19.833 [2024-12-16 21:23:09.501596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:19.833 [2024-12-16 21:23:09.501601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:19.833 [2024-12-16 21:23:09.501652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.833 [2024-12-16 21:23:09.502943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.502967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.833 [2024-12-16 21:23:09.502975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:17:19.833 [2024-12-16 21:23:09.502990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.503023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.503031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:19.833 [2024-12-16 21:23:09.503037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:19.833 [2024-12-16 21:23:09.503046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.503074] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:19.833 [2024-12-16 21:23:09.503181] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:19.833 [2024-12-16 21:23:09.503190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:19.833 [2024-12-16 21:23:09.503200] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:19.833 [2024-12-16 21:23:09.503208] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:19.833 [2024-12-16 21:23:09.503217] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:19.833 [2024-12-16 21:23:09.503223] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:19.833 [2024-12-16 21:23:09.503230] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:19.833 [2024-12-16 21:23:09.503235] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:19.833 [2024-12-16 21:23:09.503250] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:19.833 [2024-12-16 21:23:09.503256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.503264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:19.833 [2024-12-16 21:23:09.503269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:17:19.833 [2024-12-16 21:23:09.503276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.503351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-12-16 21:23:09.503360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:19.833 [2024-12-16 21:23:09.503367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:19.833 [2024-12-16 21:23:09.503374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-12-16 21:23:09.503462] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:19.833 [2024-12-16 21:23:09.503471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:19.833 
[2024-12-16 21:23:09.503476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.833 [2024-12-16 21:23:09.503484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.833 [2024-12-16 21:23:09.503490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:19.833 [2024-12-16 21:23:09.503498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:19.833 [2024-12-16 21:23:09.503503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:19.833 [2024-12-16 21:23:09.503510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:19.833 [2024-12-16 21:23:09.503515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:19.833 [2024-12-16 21:23:09.503521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.833 [2024-12-16 21:23:09.503526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:19.833 [2024-12-16 21:23:09.503534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:19.833 [2024-12-16 21:23:09.503539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.833 [2024-12-16 21:23:09.503547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:19.833 [2024-12-16 21:23:09.503552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:19.834 [2024-12-16 21:23:09.503559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:19.834 [2024-12-16 21:23:09.503573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:19.834 [2024-12-16 21:23:09.503578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:19.834 [2024-12-16 21:23:09.503591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.834 [2024-12-16 21:23:09.503604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:19.834 [2024-12-16 21:23:09.503612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.834 [2024-12-16 21:23:09.503634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:19.834 [2024-12-16 21:23:09.503641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.834 [2024-12-16 21:23:09.503653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:19.834 [2024-12-16 21:23:09.503662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.834 [2024-12-16 21:23:09.503676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:19.834 [2024-12-16 21:23:09.503681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:19.834 [2024-12-16 21:23:09.503694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:19.834 [2024-12-16 21:23:09.503702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:19.834 [2024-12-16 21:23:09.503707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.834 [2024-12-16 21:23:09.503714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:19.834 [2024-12-16 21:23:09.503719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:19.834 [2024-12-16 21:23:09.503726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:19.834 [2024-12-16 21:23:09.503739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:19.834 [2024-12-16 21:23:09.503745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503752] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:19.834 [2024-12-16 21:23:09.503758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:19.834 [2024-12-16 21:23:09.503767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.834 [2024-12-16 21:23:09.503775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.834 [2024-12-16 21:23:09.503783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:19.834 [2024-12-16 21:23:09.503789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:19.834 [2024-12-16 21:23:09.503798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:19.834 [2024-12-16 21:23:09.503804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:19.834 [2024-12-16 21:23:09.503811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:19.834 [2024-12-16 21:23:09.503817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:19.834 [2024-12-16 21:23:09.503826] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:19.834 [2024-12-16 21:23:09.503833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.834 [2024-12-16 21:23:09.503842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:19.834 [2024-12-16 21:23:09.503848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:19.834 [2024-12-16 21:23:09.503857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:19.834 [2024-12-16 21:23:09.503863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:19.834 [2024-12-16 21:23:09.503870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:19.834 [2024-12-16 21:23:09.503876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:19.834 [2024-12-16 
21:23:09.503885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:19.834 [2024-12-16 21:23:09.503892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:19.834 [2024-12-16 21:23:09.503899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:19.834 [2024-12-16 21:23:09.503905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:19.834 [2024-12-16 21:23:09.503913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:19.834 [2024-12-16 21:23:09.503919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:19.834 [2024-12-16 21:23:09.503926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:19.834 [2024-12-16 21:23:09.503933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:19.834 [2024-12-16 21:23:09.503940] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:19.834 [2024-12-16 21:23:09.503947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.834 [2024-12-16 21:23:09.503955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:19.834 [2024-12-16 21:23:09.503961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:19.834 [2024-12-16 21:23:09.503968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:19.834 [2024-12-16 21:23:09.503974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:19.834 [2024-12-16 21:23:09.503991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.834 [2024-12-16 21:23:09.503996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:19.834 [2024-12-16 21:23:09.504012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:17:19.834 [2024-12-16 21:23:09.504017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.834 [2024-12-16 21:23:09.504071] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
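The startup trace above, including the layout dump and the scrub warning, is the work of a single RPC issued at fio.sh@60, reproduced here with the arguments from the log: -t 240 widens the client-side timeout because scrubbing the NV cache can take a while (about 3 s in this run), and --l2p_dram_limit 60 is the 60 MiB cap the "59 (of 60) MiB" L2P message further down refers to.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# -b: name of the resulting FTL bdev; -d: base data bdev (the thin lvol);
# -c: NV-cache bdev (the 5171 MiB split); --l2p_dram_limit: resident L2P cap in MiB
"$rpc" -t 240 bdev_ftl_create -b ftl0 \
    -d 20a29852-9d1b-4b5c-bd63-abd50253aa12 \
    -c nvc0n1p0 --l2p_dram_limit 60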
00:17:19.834 [2024-12-16 21:23:09.504079] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:23.115 [2024-12-16 21:23:12.478036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.478090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:23.116 [2024-12-16 21:23:12.478118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2973.949 ms 00:17:23.116 [2024-12-16 21:23:12.478126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.486748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.486785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.116 [2024-12-16 21:23:12.486813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.537 ms 00:17:23.116 [2024-12-16 21:23:12.486821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.486919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.486927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:23.116 [2024-12-16 21:23:12.486938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:23.116 [2024-12-16 21:23:12.486946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.506069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.506126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.116 [2024-12-16 21:23:12.506151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.058 ms 00:17:23.116 [2024-12-16 21:23:12.506163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.506224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.506238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.116 [2024-12-16 21:23:12.506254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.116 [2024-12-16 21:23:12.506266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.506732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.506755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.116 [2024-12-16 21:23:12.506773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:17:23.116 [2024-12-16 21:23:12.506805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.507000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.507034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.116 [2024-12-16 21:23:12.507050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:17:23.116 [2024-12-16 21:23:12.507063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.514056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.514099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.116 [2024-12-16 
21:23:12.514116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:17:23.116 [2024-12-16 21:23:12.514131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.523043] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.116 [2024-12-16 21:23:12.537466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.537640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.116 [2024-12-16 21:23:12.537657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.216 ms 00:17:23.116 [2024-12-16 21:23:12.537667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.587737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.587784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:23.116 [2024-12-16 21:23:12.587797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.025 ms 00:17:23.116 [2024-12-16 21:23:12.587810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.587989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.588005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.116 [2024-12-16 21:23:12.588014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:17:23.116 [2024-12-16 21:23:12.588023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.590832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.590872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:23.116 [2024-12-16 21:23:12.590882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:17:23.116 [2024-12-16 21:23:12.590891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.593230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.593370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:23.116 [2024-12-16 21:23:12.593387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:17:23.116 [2024-12-16 21:23:12.593397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.593715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.593733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.116 [2024-12-16 21:23:12.593742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:23.116 [2024-12-16 21:23:12.593753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.619825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.619955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:23.116 [2024-12-16 21:23:12.619971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.043 ms 00:17:23.116 [2024-12-16 21:23:12.619980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.623872] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.623909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:23.116 [2024-12-16 21:23:12.623920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.821 ms 00:17:23.116 [2024-12-16 21:23:12.623930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.626743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.626866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:23.116 [2024-12-16 21:23:12.626879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:17:23.116 [2024-12-16 21:23:12.626888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.630463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.630590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.116 [2024-12-16 21:23:12.630605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.537 ms 00:17:23.116 [2024-12-16 21:23:12.630616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.630673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.630685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.116 [2024-12-16 21:23:12.630694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:23.116 [2024-12-16 21:23:12.630714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.630786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.116 [2024-12-16 21:23:12.630798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:23.116 [2024-12-16 21:23:12.630808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:23.116 [2024-12-16 21:23:12.630816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.116 [2024-12-16 21:23:12.631709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3136.510 ms, result 0 00:17:23.116 { 00:17:23.116 "name": "ftl0", 00:17:23.116 "uuid": "813eb349-99cf-4639-99f1-caf42d52ad90" 00:17:23.116 } 00:17:23.116 21:23:12 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:23.116 21:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:23.116 21:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:23.116 21:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:23.116 21:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:23.116 21:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:23.116 21:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:23.375 21:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:23.375 [ 00:17:23.375 { 00:17:23.375 "name": "ftl0", 00:17:23.375 "aliases": [ 00:17:23.375 "813eb349-99cf-4639-99f1-caf42d52ad90" 00:17:23.375 ], 00:17:23.375 "product_name": "FTL disk", 00:17:23.375 
"block_size": 4096, 00:17:23.375 "num_blocks": 20971520, 00:17:23.375 "uuid": "813eb349-99cf-4639-99f1-caf42d52ad90", 00:17:23.375 "assigned_rate_limits": { 00:17:23.375 "rw_ios_per_sec": 0, 00:17:23.375 "rw_mbytes_per_sec": 0, 00:17:23.375 "r_mbytes_per_sec": 0, 00:17:23.375 "w_mbytes_per_sec": 0 00:17:23.375 }, 00:17:23.375 "claimed": false, 00:17:23.375 "zoned": false, 00:17:23.375 "supported_io_types": { 00:17:23.375 "read": true, 00:17:23.375 "write": true, 00:17:23.375 "unmap": true, 00:17:23.375 "flush": true, 00:17:23.375 "reset": false, 00:17:23.375 "nvme_admin": false, 00:17:23.375 "nvme_io": false, 00:17:23.375 "nvme_io_md": false, 00:17:23.375 "write_zeroes": true, 00:17:23.375 "zcopy": false, 00:17:23.375 "get_zone_info": false, 00:17:23.375 "zone_management": false, 00:17:23.375 "zone_append": false, 00:17:23.375 "compare": false, 00:17:23.375 "compare_and_write": false, 00:17:23.375 "abort": false, 00:17:23.375 "seek_hole": false, 00:17:23.375 "seek_data": false, 00:17:23.375 "copy": false, 00:17:23.375 "nvme_iov_md": false 00:17:23.375 }, 00:17:23.375 "driver_specific": { 00:17:23.375 "ftl": { 00:17:23.375 "base_bdev": "20a29852-9d1b-4b5c-bd63-abd50253aa12", 00:17:23.375 "cache": "nvc0n1p0" 00:17:23.375 } 00:17:23.375 } 00:17:23.375 } 00:17:23.375 ] 00:17:23.375 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:23.375 21:23:13 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:23.375 21:23:13 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:23.633 21:23:13 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:23.633 21:23:13 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:23.892 [2024-12-16 21:23:13.415750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.415802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.892 [2024-12-16 21:23:13.415816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:23.892 [2024-12-16 21:23:13.415824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.415856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.892 [2024-12-16 21:23:13.416322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.416346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.892 [2024-12-16 21:23:13.416357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:17:23.892 [2024-12-16 21:23:13.416366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.416840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.416858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.892 [2024-12-16 21:23:13.416878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:17:23.892 [2024-12-16 21:23:13.416896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.420146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.420169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.892 [2024-12-16 
21:23:13.420179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.207 ms 00:17:23.892 [2024-12-16 21:23:13.420191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.426420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.426450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.892 [2024-12-16 21:23:13.426461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.199 ms 00:17:23.892 [2024-12-16 21:23:13.426470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.428120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.428163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.892 [2024-12-16 21:23:13.428172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:17:23.892 [2024-12-16 21:23:13.428180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.432624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.432673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.892 [2024-12-16 21:23:13.432685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.406 ms 00:17:23.892 [2024-12-16 21:23:13.432694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.432852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.432880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.892 [2024-12-16 21:23:13.432888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:23.892 [2024-12-16 21:23:13.432897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.434770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.434805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:23.892 [2024-12-16 21:23:13.434813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.844 ms 00:17:23.892 [2024-12-16 21:23:13.434822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.436089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.436126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:23.892 [2024-12-16 21:23:13.436135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:17:23.892 [2024-12-16 21:23:13.436144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.437068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.437105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.892 [2024-12-16 21:23:13.437113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:17:23.892 [2024-12-16 21:23:13.437122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.438063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.892 [2024-12-16 21:23:13.438097] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.892 [2024-12-16 21:23:13.438106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:17:23.892 [2024-12-16 21:23:13.438114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.892 [2024-12-16 21:23:13.438154] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.892 [2024-12-16 21:23:13.438169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.892 [2024-12-16 21:23:13.438179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.892 [2024-12-16 21:23:13.438188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.892 [2024-12-16 21:23:13.438196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 
21:23:13.438370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:23.893 [2024-12-16 21:23:13.438577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.893 [2024-12-16 21:23:13.438982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.894 [2024-12-16 21:23:13.438991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.894 [2024-12-16 21:23:13.439000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.894 [2024-12-16 21:23:13.439010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.894 [2024-12-16 21:23:13.439017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.894 [2024-12-16 21:23:13.439035] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.894 [2024-12-16 21:23:13.439042] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 813eb349-99cf-4639-99f1-caf42d52ad90 00:17:23.894 [2024-12-16 21:23:13.439053] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.894 [2024-12-16 21:23:13.439060] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.894 [2024-12-16 21:23:13.439068] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.894 [2024-12-16 21:23:13.439075] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.894 [2024-12-16 21:23:13.439084] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.894 [2024-12-16 21:23:13.439091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.894 [2024-12-16 21:23:13.439099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.894 [2024-12-16 21:23:13.439106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.894 [2024-12-16 21:23:13.439113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.894 [2024-12-16 21:23:13.439120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.894 [2024-12-16 21:23:13.439129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.894 [2024-12-16 21:23:13.439137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:17:23.894 [2024-12-16 21:23:13.439145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.440724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.894 [2024-12-16 21:23:13.440754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.894 [2024-12-16 21:23:13.440763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:17:23.894 [2024-12-16 21:23:13.440772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.440873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.894 [2024-12-16 21:23:13.440883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.894 [2024-12-16 21:23:13.440892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:23.894 [2024-12-16 21:23:13.440902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.446355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.446390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.894 [2024-12-16 21:23:13.446399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.446409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 
[2024-12-16 21:23:13.446464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.446474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.894 [2024-12-16 21:23:13.446482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.446493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.446563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.446576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.894 [2024-12-16 21:23:13.446584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.446593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.446618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.446640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.894 [2024-12-16 21:23:13.446648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.446657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.456382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.456423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.894 [2024-12-16 21:23:13.456432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.456442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.464394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.464435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.894 [2024-12-16 21:23:13.464456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.464466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.464540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.464554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.894 [2024-12-16 21:23:13.464562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.464570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.464622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.464645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.894 [2024-12-16 21:23:13.464653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.464662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.464746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.464757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.894 [2024-12-16 21:23:13.464765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.464774] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.464823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.464834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.894 [2024-12-16 21:23:13.464842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.464850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.464904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.464943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.894 [2024-12-16 21:23:13.464951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.464968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.465016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.894 [2024-12-16 21:23:13.465028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.894 [2024-12-16 21:23:13.465037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.894 [2024-12-16 21:23:13.465047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.894 [2024-12-16 21:23:13.465206] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.441 ms, result 0 00:17:23.894 true 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 87953 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 87953 ']' 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 87953 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87953 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.894 killing process with pid 87953 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87953' 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 87953 00:17:23.894 21:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 87953 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:29.217 21:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:29.217 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:29.217 fio-3.35 00:17:29.217 Starting 1 thread 00:17:34.498 00:17:34.498 test: (groupid=0, jobs=1): err= 0: pid=88115: Mon Dec 16 21:23:23 2024 00:17:34.498 read: IOPS=752, BW=50.0MiB/s (52.4MB/s)(255MiB/5091msec) 00:17:34.498 slat (nsec): min=3052, max=51495, avg=5403.71, stdev=3157.40 00:17:34.498 clat (usec): min=281, max=2502, avg=603.03, stdev=208.61 00:17:34.498 lat (usec): min=286, max=2507, avg=608.44, stdev=209.47 00:17:34.498 clat percentiles (usec): 00:17:34.498 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 416], 00:17:34.498 | 30.00th=[ 478], 40.00th=[ 519], 50.00th=[ 537], 60.00th=[ 594], 00:17:34.498 | 70.00th=[ 701], 80.00th=[ 840], 90.00th=[ 898], 95.00th=[ 947], 00:17:34.498 | 99.00th=[ 1123], 99.50th=[ 1172], 99.90th=[ 1319], 99.95th=[ 1500], 00:17:34.498 | 99.99th=[ 2507] 00:17:34.498 write: IOPS=758, BW=50.4MiB/s (52.8MB/s)(256MiB/5083msec); 0 zone resets 00:17:34.498 slat (usec): min=13, max=100, avg=21.23, stdev= 7.11 00:17:34.498 clat (usec): min=322, max=2443, avg=679.71, stdev=225.01 00:17:34.498 lat (usec): min=343, max=2462, avg=700.94, stdev=226.57 00:17:34.498 clat percentiles (usec): 00:17:34.498 | 1.00th=[ 343], 5.00th=[ 363], 10.00th=[ 412], 20.00th=[ 486], 00:17:34.498 | 30.00th=[ 553], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 652], 00:17:34.498 | 70.00th=[ 807], 80.00th=[ 898], 90.00th=[ 979], 95.00th=[ 1029], 00:17:34.498 | 99.00th=[ 1287], 99.50th=[ 1549], 99.90th=[ 1827], 99.95th=[ 1991], 00:17:34.498 | 99.99th=[ 2442] 00:17:34.498 bw ( KiB/s): min=35360, max=68272, per=100.00%, avg=51874.40, stdev=11051.78, samples=10 00:17:34.498 iops : min= 520, max= 1004, avg=762.80, stdev=162.55, samples=10 00:17:34.498 lat (usec) : 500=28.13%, 750=41.53%, 1000=25.30% 
00:17:34.498 lat (msec) : 2=5.02%, 4=0.03% 00:17:34.498 cpu : usr=98.68%, sys=0.28%, ctx=12, majf=0, minf=1324 00:17:34.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:34.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.498 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:34.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:34.498 00:17:34.498 Run status group 0 (all jobs): 00:17:34.498 READ: bw=50.0MiB/s (52.4MB/s), 50.0MiB/s-50.0MiB/s (52.4MB/s-52.4MB/s), io=255MiB (267MB), run=5091-5091msec 00:17:34.498 WRITE: bw=50.4MiB/s (52.8MB/s), 50.4MiB/s-50.4MiB/s (52.8MB/s-52.8MB/s), io=256MiB (269MB), run=5083-5083msec 00:17:35.069 ----------------------------------------------------- 00:17:35.069 Suppressions used: 00:17:35.069 count bytes template 00:17:35.069 1 5 /usr/src/fio/parse.c 00:17:35.069 1 8 libtcmalloc_minimal.so 00:17:35.069 1 904 libcrypto.so 00:17:35.069 ----------------------------------------------------- 00:17:35.069 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:35.069 21:23:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:35.330 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:35.330 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:35.330 fio-3.35 00:17:35.330 Starting 2 threads 00:18:01.883 00:18:01.883 first_half: (groupid=0, jobs=1): err= 0: pid=88223: Mon Dec 16 21:23:49 2024 00:18:01.883 read: IOPS=2723, BW=10.6MiB/s (11.2MB/s)(255MiB/23957msec) 00:18:01.883 slat (nsec): min=3073, max=30164, avg=3992.72, stdev=883.57 00:18:01.883 clat (usec): min=598, max=263103, avg=36848.91, stdev=20942.47 00:18:01.884 lat (usec): min=602, max=263107, avg=36852.90, stdev=20942.52 00:18:01.884 clat percentiles (msec): 00:18:01.884 | 1.00th=[ 10], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:18:01.884 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 33], 00:18:01.884 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 59], 00:18:01.884 | 99.00th=[ 155], 99.50th=[ 180], 99.90th=[ 211], 99.95th=[ 228], 00:18:01.884 | 99.99th=[ 253] 00:18:01.884 write: IOPS=3311, BW=12.9MiB/s (13.6MB/s)(256MiB/19792msec); 0 zone resets 00:18:01.884 slat (usec): min=3, max=2027, avg= 5.59, stdev= 9.42 00:18:01.884 clat (usec): min=375, max=98277, avg=10087.91, stdev=16439.34 00:18:01.884 lat (usec): min=382, max=98282, avg=10093.50, stdev=16439.41 00:18:01.884 clat percentiles (usec): 00:18:01.884 | 1.00th=[ 783], 5.00th=[ 1029], 10.00th=[ 1205], 20.00th=[ 1500], 00:18:01.884 | 30.00th=[ 2704], 40.00th=[ 3884], 50.00th=[ 5080], 60.00th=[ 6521], 00:18:01.884 | 70.00th=[ 8356], 80.00th=[11469], 90.00th=[17433], 95.00th=[64750], 00:18:01.884 | 99.00th=[78119], 99.50th=[86508], 99.90th=[95945], 99.95th=[96994], 00:18:01.884 | 99.99th=[98042] 00:18:01.884 bw ( KiB/s): min= 840, max=39416, per=100.00%, avg=23830.91, stdev=11649.36, samples=22 00:18:01.884 iops : min= 210, max= 9854, avg=5957.82, stdev=2912.28, samples=22 00:18:01.884 lat (usec) : 500=0.01%, 750=0.33%, 1000=1.85% 00:18:01.884 lat (msec) : 2=11.23%, 4=7.41%, 10=17.63%, 20=8.29%, 50=47.43% 00:18:01.884 lat (msec) : 100=4.51%, 250=1.30%, 500=0.01% 00:18:01.884 cpu : usr=99.36%, sys=0.14%, ctx=77, majf=0, minf=5563 00:18:01.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:01.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.884 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.884 issued rwts: total=65235,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.884 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.884 second_half: (groupid=0, jobs=1): err= 0: pid=88224: Mon Dec 16 21:23:49 2024 00:18:01.884 read: IOPS=2706, BW=10.6MiB/s (11.1MB/s)(255MiB/24150msec) 00:18:01.884 slat (nsec): min=3048, max=48963, avg=4576.76, stdev=1325.00 00:18:01.884 clat (usec): min=564, max=309592, avg=36309.79, stdev=23868.72 00:18:01.884 lat (usec): min=568, max=309596, avg=36314.36, stdev=23868.70 00:18:01.884 clat percentiles (msec): 00:18:01.884 | 1.00th=[ 9], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:18:01.884 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 33], 00:18:01.884 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 
41], 95.00th=[ 52], 00:18:01.884 | 99.00th=[ 176], 99.50th=[ 205], 99.90th=[ 266], 99.95th=[ 275], 00:18:01.884 | 99.99th=[ 305] 00:18:01.884 write: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(256MiB/23215msec); 0 zone resets 00:18:01.884 slat (usec): min=3, max=3791, avg= 5.91, stdev=15.96 00:18:01.884 clat (usec): min=375, max=99301, avg=10930.93, stdev=17262.22 00:18:01.884 lat (usec): min=380, max=99306, avg=10936.85, stdev=17262.28 00:18:01.884 clat percentiles (usec): 00:18:01.884 | 1.00th=[ 766], 5.00th=[ 1029], 10.00th=[ 1237], 20.00th=[ 1647], 00:18:01.884 | 30.00th=[ 3294], 40.00th=[ 4686], 50.00th=[ 5473], 60.00th=[ 6128], 00:18:01.884 | 70.00th=[ 7701], 80.00th=[12125], 90.00th=[25035], 95.00th=[65799], 00:18:01.884 | 99.00th=[79168], 99.50th=[89654], 99.90th=[95945], 99.95th=[96994], 00:18:01.884 | 99.99th=[98042] 00:18:01.884 bw ( KiB/s): min= 56, max=55336, per=92.85%, avg=20969.24, stdev=16122.50, samples=25 00:18:01.884 iops : min= 14, max=13834, avg=5242.28, stdev=4030.61, samples=25 00:18:01.884 lat (usec) : 500=0.01%, 750=0.40%, 1000=1.79% 00:18:01.884 lat (msec) : 2=9.69%, 4=6.02%, 10=21.45%, 20=7.13%, 50=48.05% 00:18:01.884 lat (msec) : 100=4.16%, 250=1.23%, 500=0.07% 00:18:01.884 cpu : usr=99.22%, sys=0.17%, ctx=35, majf=0, minf=5571 00:18:01.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:01.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.884 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.884 issued rwts: total=65371,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.884 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.884 00:18:01.884 Run status group 0 (all jobs): 00:18:01.884 READ: bw=21.1MiB/s (22.2MB/s), 10.6MiB/s-10.6MiB/s (11.1MB/s-11.2MB/s), io=510MiB (535MB), run=23957-24150msec 00:18:01.884 WRITE: bw=22.1MiB/s (23.1MB/s), 11.0MiB/s-12.9MiB/s (11.6MB/s-13.6MB/s), io=512MiB (537MB), run=19792-23215msec 00:18:01.884 ----------------------------------------------------- 00:18:01.884 Suppressions used: 00:18:01.884 count bytes template 00:18:01.884 2 10 /usr/src/fio/parse.c 00:18:01.884 2 192 /usr/src/fio/iolog.c 00:18:01.884 1 8 libtcmalloc_minimal.so 00:18:01.884 1 904 libcrypto.so 00:18:01.884 ----------------------------------------------------- 00:18:01.884 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:01.884 21:23:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:02.143 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:02.143 fio-3.35 00:18:02.143 Starting 1 thread 00:18:17.029 00:18:17.029 test: (groupid=0, jobs=1): err= 0: pid=88530: Mon Dec 16 21:24:06 2024 00:18:17.029 read: IOPS=7767, BW=30.3MiB/s (31.8MB/s)(255MiB/8394msec) 00:18:17.029 slat (nsec): min=3030, max=23196, avg=3671.76, stdev=863.59 00:18:17.029 clat (usec): min=474, max=40459, avg=16471.15, stdev=2293.54 00:18:17.029 lat (usec): min=480, max=40466, avg=16474.82, stdev=2293.72 00:18:17.029 clat percentiles (usec): 00:18:17.029 | 1.00th=[14222], 5.00th=[14615], 10.00th=[14746], 20.00th=[15008], 00:18:17.029 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:18:17.029 | 70.00th=[16450], 80.00th=[17433], 90.00th=[20055], 95.00th=[21627], 00:18:17.029 | 99.00th=[23987], 99.50th=[25297], 99.90th=[30540], 99.95th=[36439], 00:18:17.029 | 99.99th=[40633] 00:18:17.029 write: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(256MiB/5890msec); 0 zone resets 00:18:17.029 slat (usec): min=4, max=724, avg= 6.20, stdev= 5.76 00:18:17.029 clat (usec): min=433, max=64230, avg=11428.23, stdev=12306.02 00:18:17.029 lat (usec): min=438, max=64237, avg=11434.43, stdev=12306.16 00:18:17.029 clat percentiles (usec): 00:18:17.029 | 1.00th=[ 701], 5.00th=[ 873], 10.00th=[ 1012], 20.00th=[ 1205], 00:18:17.029 | 30.00th=[ 1467], 40.00th=[ 2737], 50.00th=[ 9372], 60.00th=[11469], 00:18:17.029 | 70.00th=[13566], 80.00th=[15926], 90.00th=[31327], 95.00th=[36439], 00:18:17.029 | 99.00th=[54789], 99.50th=[56886], 99.90th=[60031], 99.95th=[60556], 00:18:17.029 | 99.99th=[62129] 00:18:17.029 bw ( KiB/s): min=33416, max=54592, per=98.17%, avg=43690.67, stdev=6761.48, samples=12 00:18:17.029 iops : min= 8354, max=13646, avg=10922.50, stdev=1690.08, samples=12 00:18:17.029 lat (usec) : 500=0.01%, 750=0.93%, 1000=3.79% 00:18:17.029 lat (msec) : 2=13.49%, 4=2.60%, 10=5.70%, 20=60.57%, 50=11.99% 00:18:17.029 lat (msec) : 100=0.92% 00:18:17.029 cpu : 
usr=99.18%, sys=0.15%, ctx=25, majf=0, minf=5575 00:18:17.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:17.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.029 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.029 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.029 00:18:17.029 Run status group 0 (all jobs): 00:18:17.029 READ: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=255MiB (267MB), run=8394-8394msec 00:18:17.029 WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=256MiB (268MB), run=5890-5890msec 00:18:17.972 ----------------------------------------------------- 00:18:17.972 Suppressions used: 00:18:17.972 count bytes template 00:18:17.972 1 5 /usr/src/fio/parse.c 00:18:17.972 2 192 /usr/src/fio/iolog.c 00:18:17.972 1 8 libtcmalloc_minimal.so 00:18:17.972 1 904 libcrypto.so 00:18:17.972 ----------------------------------------------------- 00:18:17.972 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:17.972 Remove shared memory files 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70919 /dev/shm/spdk_tgt_trace.pid86891 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:17.972 ************************************ 00:18:17.972 END TEST ftl_fio_basic 00:18:17.972 ************************************ 00:18:17.972 00:18:17.972 real 1m1.634s 00:18:17.972 user 2m17.200s 00:18:17.972 sys 0m2.748s 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.972 21:24:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:17.972 21:24:07 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:17.972 21:24:07 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:17.972 21:24:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.972 21:24:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:17.972 ************************************ 00:18:17.972 START TEST ftl_bdevperf 00:18:17.972 ************************************ 00:18:17.972 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:17.972 * Looking for test storage... 
00:18:17.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:17.972 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:17.972 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:17.973 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:18.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.234 --rc genhtml_branch_coverage=1 00:18:18.234 --rc genhtml_function_coverage=1 00:18:18.234 --rc genhtml_legend=1 00:18:18.234 --rc geninfo_all_blocks=1 00:18:18.234 --rc geninfo_unexecuted_blocks=1 00:18:18.234 00:18:18.234 ' 00:18:18.234 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:18.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.234 --rc genhtml_branch_coverage=1 00:18:18.235 
--rc genhtml_function_coverage=1 00:18:18.235 --rc genhtml_legend=1 00:18:18.235 --rc geninfo_all_blocks=1 00:18:18.235 --rc geninfo_unexecuted_blocks=1 00:18:18.235 00:18:18.235 ' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:18.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.235 --rc genhtml_branch_coverage=1 00:18:18.235 --rc genhtml_function_coverage=1 00:18:18.235 --rc genhtml_legend=1 00:18:18.235 --rc geninfo_all_blocks=1 00:18:18.235 --rc geninfo_unexecuted_blocks=1 00:18:18.235 00:18:18.235 ' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:18.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.235 --rc genhtml_branch_coverage=1 00:18:18.235 --rc genhtml_function_coverage=1 00:18:18.235 --rc genhtml_legend=1 00:18:18.235 --rc geninfo_all_blocks=1 00:18:18.235 --rc geninfo_unexecuted_blocks=1 00:18:18.235 00:18:18.235 ' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=88773 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 88773 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88773 ']' 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.235 21:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:18.235 [2024-12-16 21:24:07.785501] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:18:18.235 [2024-12-16 21:24:07.785750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88773 ] 00:18:18.235 [2024-12-16 21:24:07.931896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.497 [2024-12-16 21:24:07.951967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.068 21:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.068 21:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:19.068 21:24:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:19.068 21:24:08 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:19.068 21:24:08 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:19.068 21:24:08 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:19.069 21:24:08 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:19.069 21:24:08 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:19.330 21:24:08 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:19.330 21:24:08 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:19.330 21:24:08 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:19.330 21:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:19.330 21:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:19.330 21:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:19.330 21:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:19.330 21:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:19.591 { 00:18:19.591 "name": "nvme0n1", 00:18:19.591 "aliases": [ 00:18:19.591 "babefb3b-fd48-4aa0-8b16-03a80e5ca4af" 00:18:19.591 ], 00:18:19.591 "product_name": "NVMe disk", 00:18:19.591 "block_size": 4096, 00:18:19.591 "num_blocks": 1310720, 00:18:19.591 "uuid": "babefb3b-fd48-4aa0-8b16-03a80e5ca4af", 00:18:19.591 "numa_id": -1, 00:18:19.591 "assigned_rate_limits": { 00:18:19.591 "rw_ios_per_sec": 0, 00:18:19.591 "rw_mbytes_per_sec": 0, 00:18:19.591 "r_mbytes_per_sec": 0, 00:18:19.591 "w_mbytes_per_sec": 0 00:18:19.591 }, 00:18:19.591 "claimed": true, 00:18:19.591 "claim_type": "read_many_write_one", 00:18:19.591 "zoned": false, 00:18:19.591 "supported_io_types": { 00:18:19.591 "read": true, 00:18:19.591 "write": true, 00:18:19.591 "unmap": true, 00:18:19.591 "flush": true, 00:18:19.591 "reset": true, 00:18:19.591 "nvme_admin": true, 00:18:19.591 "nvme_io": true, 00:18:19.591 "nvme_io_md": false, 00:18:19.591 "write_zeroes": true, 00:18:19.591 "zcopy": false, 00:18:19.591 "get_zone_info": false, 00:18:19.591 "zone_management": false, 00:18:19.591 "zone_append": false, 00:18:19.591 "compare": true, 00:18:19.591 "compare_and_write": false, 00:18:19.591 "abort": true, 00:18:19.591 "seek_hole": false, 00:18:19.591 "seek_data": false, 00:18:19.591 "copy": true, 00:18:19.591 "nvme_iov_md": false 00:18:19.591 }, 00:18:19.591 "driver_specific": { 00:18:19.591 
"nvme": [ 00:18:19.591 { 00:18:19.591 "pci_address": "0000:00:11.0", 00:18:19.591 "trid": { 00:18:19.591 "trtype": "PCIe", 00:18:19.591 "traddr": "0000:00:11.0" 00:18:19.591 }, 00:18:19.591 "ctrlr_data": { 00:18:19.591 "cntlid": 0, 00:18:19.591 "vendor_id": "0x1b36", 00:18:19.591 "model_number": "QEMU NVMe Ctrl", 00:18:19.591 "serial_number": "12341", 00:18:19.591 "firmware_revision": "8.0.0", 00:18:19.591 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:19.591 "oacs": { 00:18:19.591 "security": 0, 00:18:19.591 "format": 1, 00:18:19.591 "firmware": 0, 00:18:19.591 "ns_manage": 1 00:18:19.591 }, 00:18:19.591 "multi_ctrlr": false, 00:18:19.591 "ana_reporting": false 00:18:19.591 }, 00:18:19.591 "vs": { 00:18:19.591 "nvme_version": "1.4" 00:18:19.591 }, 00:18:19.591 "ns_data": { 00:18:19.591 "id": 1, 00:18:19.591 "can_share": false 00:18:19.591 } 00:18:19.591 } 00:18:19.591 ], 00:18:19.591 "mp_policy": "active_passive" 00:18:19.591 } 00:18:19.591 } 00:18:19.591 ]' 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:19.591 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:19.852 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=8d990f6d-8029-420c-9a85-ea04aca19e5d 00:18:19.852 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:19.852 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d990f6d-8029-420c-9a85-ea04aca19e5d 00:18:20.112 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:20.373 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=d193c359-4ece-434b-9bce-cd83ac04cafc 00:18:20.373 21:24:09 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d193c359-4ece-434b-9bce-cd83ac04cafc 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:20.373 21:24:10 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:20.373 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:20.634 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:20.634 { 00:18:20.634 "name": "8d85ef15-b949-4b01-b544-5887b7c82bd5", 00:18:20.634 "aliases": [ 00:18:20.634 "lvs/nvme0n1p0" 00:18:20.634 ], 00:18:20.634 "product_name": "Logical Volume", 00:18:20.634 "block_size": 4096, 00:18:20.634 "num_blocks": 26476544, 00:18:20.634 "uuid": "8d85ef15-b949-4b01-b544-5887b7c82bd5", 00:18:20.634 "assigned_rate_limits": { 00:18:20.634 "rw_ios_per_sec": 0, 00:18:20.634 "rw_mbytes_per_sec": 0, 00:18:20.634 "r_mbytes_per_sec": 0, 00:18:20.634 "w_mbytes_per_sec": 0 00:18:20.634 }, 00:18:20.634 "claimed": false, 00:18:20.634 "zoned": false, 00:18:20.634 "supported_io_types": { 00:18:20.634 "read": true, 00:18:20.634 "write": true, 00:18:20.634 "unmap": true, 00:18:20.634 "flush": false, 00:18:20.634 "reset": true, 00:18:20.634 "nvme_admin": false, 00:18:20.634 "nvme_io": false, 00:18:20.634 "nvme_io_md": false, 00:18:20.634 "write_zeroes": true, 00:18:20.634 "zcopy": false, 00:18:20.634 "get_zone_info": false, 00:18:20.634 "zone_management": false, 00:18:20.634 "zone_append": false, 00:18:20.634 "compare": false, 00:18:20.634 "compare_and_write": false, 00:18:20.634 "abort": false, 00:18:20.634 "seek_hole": true, 00:18:20.634 "seek_data": true, 00:18:20.634 "copy": false, 00:18:20.634 "nvme_iov_md": false 00:18:20.634 }, 00:18:20.634 "driver_specific": { 00:18:20.634 "lvol": { 00:18:20.634 "lvol_store_uuid": "d193c359-4ece-434b-9bce-cd83ac04cafc", 00:18:20.634 "base_bdev": "nvme0n1", 00:18:20.634 "thin_provision": true, 00:18:20.634 "num_allocated_clusters": 0, 00:18:20.634 "snapshot": false, 00:18:20.634 "clone": false, 00:18:20.634 "esnap_clone": false 00:18:20.634 } 00:18:20.634 } 00:18:20.634 } 00:18:20.634 ]' 00:18:20.634 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:20.634 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:20.634 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:20.895 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:20.895 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:20.895 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:20.895 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:20.895 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:20.895 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:21.157 { 00:18:21.157 "name": "8d85ef15-b949-4b01-b544-5887b7c82bd5", 00:18:21.157 "aliases": [ 00:18:21.157 "lvs/nvme0n1p0" 00:18:21.157 ], 00:18:21.157 "product_name": "Logical Volume", 00:18:21.157 "block_size": 4096, 00:18:21.157 "num_blocks": 26476544, 00:18:21.157 "uuid": "8d85ef15-b949-4b01-b544-5887b7c82bd5", 00:18:21.157 "assigned_rate_limits": { 00:18:21.157 "rw_ios_per_sec": 0, 00:18:21.157 "rw_mbytes_per_sec": 0, 00:18:21.157 "r_mbytes_per_sec": 0, 00:18:21.157 "w_mbytes_per_sec": 0 00:18:21.157 }, 00:18:21.157 "claimed": false, 00:18:21.157 "zoned": false, 00:18:21.157 "supported_io_types": { 00:18:21.157 "read": true, 00:18:21.157 "write": true, 00:18:21.157 "unmap": true, 00:18:21.157 "flush": false, 00:18:21.157 "reset": true, 00:18:21.157 "nvme_admin": false, 00:18:21.157 "nvme_io": false, 00:18:21.157 "nvme_io_md": false, 00:18:21.157 "write_zeroes": true, 00:18:21.157 "zcopy": false, 00:18:21.157 "get_zone_info": false, 00:18:21.157 "zone_management": false, 00:18:21.157 "zone_append": false, 00:18:21.157 "compare": false, 00:18:21.157 "compare_and_write": false, 00:18:21.157 "abort": false, 00:18:21.157 "seek_hole": true, 00:18:21.157 "seek_data": true, 00:18:21.157 "copy": false, 00:18:21.157 "nvme_iov_md": false 00:18:21.157 }, 00:18:21.157 "driver_specific": { 00:18:21.157 "lvol": { 00:18:21.157 "lvol_store_uuid": "d193c359-4ece-434b-9bce-cd83ac04cafc", 00:18:21.157 "base_bdev": "nvme0n1", 00:18:21.157 "thin_provision": true, 00:18:21.157 "num_allocated_clusters": 0, 00:18:21.157 "snapshot": false, 00:18:21.157 "clone": false, 00:18:21.157 "esnap_clone": false 00:18:21.157 } 00:18:21.157 } 00:18:21.157 } 00:18:21.157 ]' 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:21.157 21:24:10 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:21.417 21:24:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:21.417 21:24:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:21.417 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:21.417 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.417 21:24:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:21.417 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:21.417 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d85ef15-b949-4b01-b544-5887b7c82bd5 00:18:21.675 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:21.675 { 00:18:21.675 "name": "8d85ef15-b949-4b01-b544-5887b7c82bd5", 00:18:21.675 "aliases": [ 00:18:21.675 "lvs/nvme0n1p0" 00:18:21.675 ], 00:18:21.675 "product_name": "Logical Volume", 00:18:21.675 "block_size": 4096, 00:18:21.675 "num_blocks": 26476544, 00:18:21.675 "uuid": "8d85ef15-b949-4b01-b544-5887b7c82bd5", 00:18:21.675 "assigned_rate_limits": { 00:18:21.675 "rw_ios_per_sec": 0, 00:18:21.675 "rw_mbytes_per_sec": 0, 00:18:21.675 "r_mbytes_per_sec": 0, 00:18:21.675 "w_mbytes_per_sec": 0 00:18:21.675 }, 00:18:21.675 "claimed": false, 00:18:21.675 "zoned": false, 00:18:21.675 "supported_io_types": { 00:18:21.675 "read": true, 00:18:21.675 "write": true, 00:18:21.675 "unmap": true, 00:18:21.675 "flush": false, 00:18:21.675 "reset": true, 00:18:21.675 "nvme_admin": false, 00:18:21.675 "nvme_io": false, 00:18:21.675 "nvme_io_md": false, 00:18:21.675 "write_zeroes": true, 00:18:21.675 "zcopy": false, 00:18:21.675 "get_zone_info": false, 00:18:21.675 "zone_management": false, 00:18:21.675 "zone_append": false, 00:18:21.675 "compare": false, 00:18:21.675 "compare_and_write": false, 00:18:21.675 "abort": false, 00:18:21.675 "seek_hole": true, 00:18:21.675 "seek_data": true, 00:18:21.675 "copy": false, 00:18:21.675 "nvme_iov_md": false 00:18:21.675 }, 00:18:21.675 "driver_specific": { 00:18:21.675 "lvol": { 00:18:21.675 "lvol_store_uuid": "d193c359-4ece-434b-9bce-cd83ac04cafc", 00:18:21.675 "base_bdev": "nvme0n1", 00:18:21.675 "thin_provision": true, 00:18:21.676 "num_allocated_clusters": 0, 00:18:21.676 "snapshot": false, 00:18:21.676 "clone": false, 00:18:21.676 "esnap_clone": false 00:18:21.676 } 00:18:21.676 } 00:18:21.676 } 00:18:21.676 ]' 00:18:21.676 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:21.676 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:21.676 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:21.676 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:21.676 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:21.676 21:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:21.676 21:24:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:21.676 21:24:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8d85ef15-b949-4b01-b544-5887b7c82bd5 -c nvc0n1p0 --l2p_dram_limit 20 00:18:21.937 [2024-12-16 21:24:11.508329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.508371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:21.937 [2024-12-16 21:24:11.508383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:21.937 [2024-12-16 21:24:11.508389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.508437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.508444] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:21.937 [2024-12-16 21:24:11.508454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:21.937 [2024-12-16 21:24:11.508461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.508479] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:21.937 [2024-12-16 21:24:11.508712] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:21.937 [2024-12-16 21:24:11.508726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.508734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:21.937 [2024-12-16 21:24:11.508743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:18:21.937 [2024-12-16 21:24:11.508749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.508774] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a5a4dece-2dca-4155-abdd-d2730c80ba3f 00:18:21.937 [2024-12-16 21:24:11.509780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.509896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:21.937 [2024-12-16 21:24:11.509915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:21.937 [2024-12-16 21:24:11.509924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.514515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.514547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:21.937 [2024-12-16 21:24:11.514555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.525 ms 00:18:21.937 [2024-12-16 21:24:11.514565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.514621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.514639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:21.937 [2024-12-16 21:24:11.514650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:21.937 [2024-12-16 21:24:11.514657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.514691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.514704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:21.937 [2024-12-16 21:24:11.514710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:21.937 [2024-12-16 21:24:11.514717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.514733] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:21.937 [2024-12-16 21:24:11.516070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.516093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:21.937 [2024-12-16 21:24:11.516103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:18:21.937 [2024-12-16 21:24:11.516109] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.516135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.516142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:21.937 [2024-12-16 21:24:11.516150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:21.937 [2024-12-16 21:24:11.516156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.516167] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:21.937 [2024-12-16 21:24:11.516278] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:21.937 [2024-12-16 21:24:11.516288] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:21.937 [2024-12-16 21:24:11.516296] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:21.937 [2024-12-16 21:24:11.516305] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:21.937 [2024-12-16 21:24:11.516311] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:21.937 [2024-12-16 21:24:11.516321] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:21.937 [2024-12-16 21:24:11.516330] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:21.937 [2024-12-16 21:24:11.516340] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:21.937 [2024-12-16 21:24:11.516351] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:21.937 [2024-12-16 21:24:11.516360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.516365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:21.937 [2024-12-16 21:24:11.516372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:18:21.937 [2024-12-16 21:24:11.516378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.516450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.937 [2024-12-16 21:24:11.516461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:21.937 [2024-12-16 21:24:11.516469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:21.937 [2024-12-16 21:24:11.516474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.937 [2024-12-16 21:24:11.516549] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:21.937 [2024-12-16 21:24:11.516561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:21.937 [2024-12-16 21:24:11.516569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:21.937 [2024-12-16 21:24:11.516579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:21.937 [2024-12-16 21:24:11.516591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:21.937 
[2024-12-16 21:24:11.516603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:21.937 [2024-12-16 21:24:11.516609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:21.937 [2024-12-16 21:24:11.516621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:21.937 [2024-12-16 21:24:11.516635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:21.937 [2024-12-16 21:24:11.516644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:21.937 [2024-12-16 21:24:11.516649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:21.937 [2024-12-16 21:24:11.516655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:21.937 [2024-12-16 21:24:11.516661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:21.937 [2024-12-16 21:24:11.516673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:21.937 [2024-12-16 21:24:11.516680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:21.937 [2024-12-16 21:24:11.516691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:21.937 [2024-12-16 21:24:11.516703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:21.937 [2024-12-16 21:24:11.516708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:21.937 [2024-12-16 21:24:11.516719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:21.937 [2024-12-16 21:24:11.516725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:21.937 [2024-12-16 21:24:11.516748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:21.937 [2024-12-16 21:24:11.516753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:21.937 [2024-12-16 21:24:11.516765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:21.937 [2024-12-16 21:24:11.516771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:21.937 [2024-12-16 21:24:11.516782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:21.937 [2024-12-16 21:24:11.516787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:21.937 [2024-12-16 21:24:11.516793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:21.937 [2024-12-16 21:24:11.516799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:21.937 [2024-12-16 21:24:11.516805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:21.937 [2024-12-16 21:24:11.516810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.937 [2024-12-16 21:24:11.516818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:21.938 [2024-12-16 21:24:11.516826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:21.938 [2024-12-16 21:24:11.516836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.938 [2024-12-16 21:24:11.516844] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:21.938 [2024-12-16 21:24:11.516858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:21.938 [2024-12-16 21:24:11.516866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:21.938 [2024-12-16 21:24:11.516873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.938 [2024-12-16 21:24:11.516880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:21.938 [2024-12-16 21:24:11.516886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:21.938 [2024-12-16 21:24:11.516892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:21.938 [2024-12-16 21:24:11.516898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:21.938 [2024-12-16 21:24:11.516903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:21.938 [2024-12-16 21:24:11.516910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:21.938 [2024-12-16 21:24:11.516917] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:21.938 [2024-12-16 21:24:11.516929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:21.938 [2024-12-16 21:24:11.516936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:21.938 [2024-12-16 21:24:11.516943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:21.938 [2024-12-16 21:24:11.516956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:21.938 [2024-12-16 21:24:11.516963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:21.938 [2024-12-16 21:24:11.516968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:21.938 [2024-12-16 21:24:11.516977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:21.938 [2024-12-16 21:24:11.516982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:21.938 [2024-12-16 21:24:11.516993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:21.938 [2024-12-16 21:24:11.517001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:21.938 [2024-12-16 21:24:11.517012] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:21.938 [2024-12-16 21:24:11.517021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:21.938 [2024-12-16 21:24:11.517033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:21.938 [2024-12-16 21:24:11.517042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:21.938 [2024-12-16 21:24:11.517051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:21.938 [2024-12-16 21:24:11.517057] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:21.938 [2024-12-16 21:24:11.517066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:21.938 [2024-12-16 21:24:11.517074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:21.938 [2024-12-16 21:24:11.517084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:21.938 [2024-12-16 21:24:11.517093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:21.938 [2024-12-16 21:24:11.517100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:21.938 [2024-12-16 21:24:11.517107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.938 [2024-12-16 21:24:11.517116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:21.938 [2024-12-16 21:24:11.517123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:18:21.938 [2024-12-16 21:24:11.517130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.938 [2024-12-16 21:24:11.517156] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
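Note on the provisioning chain traced above: everything from the controller attach down to the bdev_ftl_create call is ordinary rpc.py traffic, and get_bdev_size is simply block_size x num_blocks scaled to MiB (4096 x 1310720 / 2^20 = 5120 MiB for raw nvme0n1, 4096 x 26476544 / 2^20 = 103424 MiB for the lvol). A condensed replay of the sequence as a sketch; UUID capture and the clear_lvols cleanup pass vary per run:

    # Sketch of the provisioning sequence traced above (values mirror this run).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base device
    LVS=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)                   # prints lvstore UUID
    LVOL=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS")        # thin 103424 MiB lvol
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # NV cache device
    $RPC bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB partition
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$LVOL" -c nvc0n1p0 --l2p_dram_limit 20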
00:18:21.938 [2024-12-16 21:24:11.517168] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:26.235 [2024-12-16 21:24:15.488982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.489048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:26.235 [2024-12-16 21:24:15.489072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3971.810 ms 00:18:26.235 [2024-12-16 21:24:15.489085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.498315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.498367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.235 [2024-12-16 21:24:15.498390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.140 ms 00:18:26.235 [2024-12-16 21:24:15.498407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.498536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.498557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:26.235 [2024-12-16 21:24:15.498580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:26.235 [2024-12-16 21:24:15.498595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.520989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.521250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.235 [2024-12-16 21:24:15.521293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.324 ms 00:18:26.235 [2024-12-16 21:24:15.521333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.521405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.521450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.235 [2024-12-16 21:24:15.521477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:26.235 [2024-12-16 21:24:15.521506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.522113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.522171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.235 [2024-12-16 21:24:15.522198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:18:26.235 [2024-12-16 21:24:15.522229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.522524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.522572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.235 [2024-12-16 21:24:15.522603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:18:26.235 [2024-12-16 21:24:15.522652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.529226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.529267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.235 [2024-12-16 
21:24:15.529285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.529 ms 00:18:26.235 [2024-12-16 21:24:15.529298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.538106] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:26.235 [2024-12-16 21:24:15.544093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.544225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:26.235 [2024-12-16 21:24:15.544251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.722 ms 00:18:26.235 [2024-12-16 21:24:15.544263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.626968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.627146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:26.235 [2024-12-16 21:24:15.627183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.666 ms 00:18:26.235 [2024-12-16 21:24:15.627199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.627446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.235 [2024-12-16 21:24:15.627471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:26.235 [2024-12-16 21:24:15.627489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:26.235 [2024-12-16 21:24:15.627502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.235 [2024-12-16 21:24:15.632901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.236 [2024-12-16 21:24:15.632979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:26.236 [2024-12-16 21:24:15.632999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.365 ms 00:18:26.236 [2024-12-16 21:24:15.633012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.236 [2024-12-16 21:24:15.637765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.236 [2024-12-16 21:24:15.637921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:26.236 [2024-12-16 21:24:15.637949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.689 ms 00:18:26.236 [2024-12-16 21:24:15.637960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.236 [2024-12-16 21:24:15.638367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.236 [2024-12-16 21:24:15.638402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:26.236 [2024-12-16 21:24:15.638422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:18:26.236 [2024-12-16 21:24:15.638446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.236 [2024-12-16 21:24:15.684116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.236 [2024-12-16 21:24:15.684166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:26.236 [2024-12-16 21:24:15.684186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.613 ms 00:18:26.236 [2024-12-16 21:24:15.684199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.236 [2024-12-16 21:24:15.690657] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.236 [2024-12-16 21:24:15.690702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:26.236 [2024-12-16 21:24:15.690716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:18:26.236 [2024-12-16 21:24:15.690725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.236 [2024-12-16 21:24:15.696351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.236 [2024-12-16 21:24:15.696393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:26.236 [2024-12-16 21:24:15.696406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.576 ms 00:18:26.236 [2024-12-16 21:24:15.696413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.236 [2024-12-16 21:24:15.702137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.236 [2024-12-16 21:24:15.702183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:26.236 [2024-12-16 21:24:15.702200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.677 ms 00:18:26.236 [2024-12-16 21:24:15.702208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.236 [2024-12-16 21:24:15.702257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.236 [2024-12-16 21:24:15.702270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:26.236 [2024-12-16 21:24:15.702282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:26.236 [2024-12-16 21:24:15.702289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.236 [2024-12-16 21:24:15.702357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.236 [2024-12-16 21:24:15.702367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:26.236 [2024-12-16 21:24:15.702383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:26.236 [2024-12-16 21:24:15.702394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.236 [2024-12-16 21:24:15.703463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4194.652 ms, result 0 00:18:26.236 { 00:18:26.236 "name": "ftl0", 00:18:26.236 "uuid": "a5a4dece-2dca-4155-abdd-d2730c80ba3f" 00:18:26.236 } 00:18:26.236 21:24:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:26.236 21:24:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:26.236 21:24:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:26.496 21:24:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:26.496 [2024-12-16 21:24:16.026124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:26.496 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:26.496 Zero copy mechanism will not be used. 00:18:26.496 Running I/O for 4 seconds... 
00:18:28.374 796.00 IOPS, 52.86 MiB/s [2024-12-16T21:24:19.454Z] 773.50 IOPS, 51.37 MiB/s [2024-12-16T21:24:20.388Z] 759.67 IOPS, 50.45 MiB/s [2024-12-16T21:24:20.388Z] 774.00 IOPS, 51.40 MiB/s 00:18:30.688 Latency(us) 00:18:30.688 [2024-12-16T21:24:20.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.688 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:30.688 ftl0 : 4.00 773.97 51.40 0.00 0.00 1372.45 316.65 3024.74 00:18:30.688 [2024-12-16T21:24:20.388Z] =================================================================================================================== 00:18:30.688 [2024-12-16T21:24:20.388Z] Total : 773.97 51.40 0.00 0.00 1372.45 316.65 3024.74 00:18:30.688 { 00:18:30.688 "results": [ 00:18:30.688 { 00:18:30.688 "job": "ftl0", 00:18:30.688 "core_mask": "0x1", 00:18:30.688 "workload": "randwrite", 00:18:30.688 "status": "finished", 00:18:30.688 "queue_depth": 1, 00:18:30.688 "io_size": 69632, 00:18:30.688 "runtime": 4.001444, 00:18:30.688 "iops": 773.9705966146221, 00:18:30.688 "mibps": 51.396484931439744, 00:18:30.688 "io_failed": 0, 00:18:30.688 "io_timeout": 0, 00:18:30.688 "avg_latency_us": 1372.4453699610044, 00:18:30.688 "min_latency_us": 316.6523076923077, 00:18:30.688 "max_latency_us": 3024.7384615384617 00:18:30.688 } 00:18:30.688 ], 00:18:30.688 "core_count": 1 00:18:30.688 } 00:18:30.688 [2024-12-16 21:24:20.034126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:30.688 21:24:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:30.688 [2024-12-16 21:24:20.142668] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:30.688 Running I/O for 4 seconds... 
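Note: the three perform_tests passes in this test differ only in queue depth, workload, and I/O size; the one just launched is the second. Collected in one place as a sketch, with the helper path taken from the trace above:

    PERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $PERF perform_tests -q 1   -w randwrite -t 4 -o 69632   # low-qd, 68 KiB writes
    $PERF perform_tests -q 128 -w randwrite -t 4 -o 4096    # high-qd, 4 KiB writes
    $PERF perform_tests -q 128 -w verify    -t 4 -o 4096    # read-back verification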
00:18:32.571 7933.00 IOPS, 30.99 MiB/s [2024-12-16T21:24:23.213Z] 6986.00 IOPS, 27.29 MiB/s [2024-12-16T21:24:24.152Z] 6392.00 IOPS, 24.97 MiB/s [2024-12-16T21:24:24.413Z] 6168.00 IOPS, 24.09 MiB/s 00:18:34.713 Latency(us) 00:18:34.713 [2024-12-16T21:24:24.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.713 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.713 ftl0 : 4.03 6159.29 24.06 0.00 0.00 20718.09 247.34 44766.13 00:18:34.713 [2024-12-16T21:24:24.413Z] =================================================================================================================== 00:18:34.713 [2024-12-16T21:24:24.413Z] Total : 6159.29 24.06 0.00 0.00 20718.09 0.00 44766.13 00:18:34.713 [2024-12-16 21:24:24.174806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:34.713 { 00:18:34.713 "results": [ 00:18:34.713 { 00:18:34.713 "job": "ftl0", 00:18:34.713 "core_mask": "0x1", 00:18:34.713 "workload": "randwrite", 00:18:34.713 "status": "finished", 00:18:34.713 "queue_depth": 128, 00:18:34.713 "io_size": 4096, 00:18:34.713 "runtime": 4.025791, 00:18:34.713 "iops": 6159.286460722874, 00:18:34.713 "mibps": 24.059712737198726, 00:18:34.713 "io_failed": 0, 00:18:34.713 "io_timeout": 0, 00:18:34.713 "avg_latency_us": 20718.088862967972, 00:18:34.713 "min_latency_us": 247.3353846153846, 00:18:34.713 "max_latency_us": 44766.12923076923 00:18:34.713 } 00:18:34.713 ], 00:18:34.713 "core_count": 1 00:18:34.713 } 00:18:34.713 21:24:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:34.713 [2024-12-16 21:24:24.287343] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:34.713 Running I/O for 4 seconds... 
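Note: the MiB/s column in the qd=128 table above follows directly from the IOPS column, throughput = iops x io_size, so 6159.29 x 4096 / 2^20 = 24.06 MiB/s, matching the reported mibps. The verify range printed in the table that follows is given in hex; converting it shows the pass covers the whole device:

    awk 'BEGIN { printf "%.2f MiB/s\n", 6159.29 * 4096 / 1048576 }'  # -> 24.06
    printf '%d\n' 0x1400000  # -> 20971520 blocks, the L2P entry count logged at startup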
00:18:37.027 5370.00 IOPS, 20.98 MiB/s [2024-12-16T21:24:27.296Z] 5838.00 IOPS, 22.80 MiB/s [2024-12-16T21:24:28.680Z] 5631.33 IOPS, 22.00 MiB/s [2024-12-16T21:24:28.680Z] 5351.00 IOPS, 20.90 MiB/s 00:18:38.980 Latency(us) 00:18:38.980 [2024-12-16T21:24:28.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.980 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:38.980 Verification LBA range: start 0x0 length 0x1400000 00:18:38.980 ftl0 : 4.01 5364.08 20.95 0.00 0.00 23793.34 294.60 45169.43 00:18:38.980 [2024-12-16T21:24:28.680Z] =================================================================================================================== 00:18:38.980 [2024-12-16T21:24:28.680Z] Total : 5364.08 20.95 0.00 0.00 23793.34 0.00 45169.43 00:18:38.980 [2024-12-16 21:24:28.309782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 { 00:18:38.980 "results": [ 00:18:38.980 { 00:18:38.980 "job": "ftl0", 00:18:38.980 "core_mask": "0x1", 00:18:38.980 "workload": "verify", 00:18:38.980 "status": "finished", 00:18:38.980 "verify_range": { 00:18:38.980 "start": 0, 00:18:38.980 "length": 20971520 00:18:38.980 }, 00:18:38.980 "queue_depth": 128, 00:18:38.980 "io_size": 4096, 00:18:38.980 "runtime": 4.013363, 00:18:38.980 "iops": 5364.079949907347, 00:18:38.980 "mibps": 20.953437304325575, 00:18:38.980 "io_failed": 0, 00:18:38.980 "io_timeout": 0, 00:18:38.980 "avg_latency_us": 23793.337243804133, 00:18:38.980 "min_latency_us": 294.5969230769231, 00:18:38.980 "max_latency_us": 45169.42769230769 00:18:38.980 } 00:18:38.980 ], 00:18:38.980 "core_count": 1 00:18:38.980 } 00:18:38.980 21:24:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:38.980 [2024-12-16 21:24:28.526168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.980 [2024-12-16 21:24:28.526231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:38.980 [2024-12-16 21:24:28.526248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:38.980 [2024-12-16 21:24:28.526263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.980 [2024-12-16 21:24:28.526293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:38.980 [2024-12-16 21:24:28.527008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.980 [2024-12-16 21:24:28.527054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:38.980 [2024-12-16 21:24:28.527076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:18:38.980 [2024-12-16 21:24:28.527088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.980 [2024-12-16 21:24:28.530351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.980 [2024-12-16 21:24:28.530401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:38.980 [2024-12-16 21:24:28.530418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:18:38.980 [2024-12-16 21:24:28.530433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.756265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.242 [2024-12-16 21:24:28.756331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:18:39.242 [2024-12-16 21:24:28.756348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 225.812 ms 00:18:39.242 [2024-12-16 21:24:28.756360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.762544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.242 [2024-12-16 21:24:28.762590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:39.242 [2024-12-16 21:24:28.762603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.143 ms 00:18:39.242 [2024-12-16 21:24:28.762614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.765538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.242 [2024-12-16 21:24:28.765594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:39.242 [2024-12-16 21:24:28.765604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.835 ms 00:18:39.242 [2024-12-16 21:24:28.765614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.772698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.242 [2024-12-16 21:24:28.772760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:39.242 [2024-12-16 21:24:28.772773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.027 ms 00:18:39.242 [2024-12-16 21:24:28.772789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.772936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.242 [2024-12-16 21:24:28.772967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:39.242 [2024-12-16 21:24:28.772978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:39.242 [2024-12-16 21:24:28.772989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.776583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.242 [2024-12-16 21:24:28.776801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:39.242 [2024-12-16 21:24:28.776825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.570 ms 00:18:39.242 [2024-12-16 21:24:28.776834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.779739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.242 [2024-12-16 21:24:28.779795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:39.242 [2024-12-16 21:24:28.779805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:18:39.242 [2024-12-16 21:24:28.779815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.782262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.242 [2024-12-16 21:24:28.782317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:39.242 [2024-12-16 21:24:28.782327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms 00:18:39.242 [2024-12-16 21:24:28.782340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.784486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.242 [2024-12-16 21:24:28.784542] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:39.242 [2024-12-16 21:24:28.784552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.078 ms 00:18:39.242 [2024-12-16 21:24:28.784561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.242 [2024-12-16 21:24:28.784602] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:39.242 [2024-12-16 21:24:28.784650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:39.242 [2024-12-16 21:24:28.784855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.784990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:39.242 [2024-12-16 21:24:28.785180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785556] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:39.243 [2024-12-16 21:24:28.785601] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:39.243 [2024-12-16 21:24:28.785610] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a5a4dece-2dca-4155-abdd-d2730c80ba3f 00:18:39.243 [2024-12-16 21:24:28.785641] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:39.243 [2024-12-16 21:24:28.785649] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:39.243 [2024-12-16 21:24:28.785659] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:39.243 [2024-12-16 21:24:28.785668] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:39.243 [2024-12-16 21:24:28.785680] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:39.243 [2024-12-16 21:24:28.785692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:39.243 [2024-12-16 21:24:28.785702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:39.243 [2024-12-16 21:24:28.785709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:39.243 [2024-12-16 21:24:28.785717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:39.243 [2024-12-16 21:24:28.785725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.243 [2024-12-16 21:24:28.785740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:39.243 [2024-12-16 21:24:28.785751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.124 ms 00:18:39.243 [2024-12-16 21:24:28.785760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.788060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.243 [2024-12-16 21:24:28.788094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:39.243 [2024-12-16 21:24:28.788104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:18:39.243 [2024-12-16 21:24:28.788114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.788232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.243 [2024-12-16 21:24:28.788247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:39.243 [2024-12-16 21:24:28.788255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:39.243 [2024-12-16 21:24:28.788273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.796135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.796191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:39.243 [2024-12-16 21:24:28.796207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.243 [2024-12-16 21:24:28.796218] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.796280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.796294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:39.243 [2024-12-16 21:24:28.796303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.243 [2024-12-16 21:24:28.796313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.796386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.796400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:39.243 [2024-12-16 21:24:28.796408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.243 [2024-12-16 21:24:28.796418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.796433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.796444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:39.243 [2024-12-16 21:24:28.796453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.243 [2024-12-16 21:24:28.796466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.810963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.811032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:39.243 [2024-12-16 21:24:28.811043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.243 [2024-12-16 21:24:28.811053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.823295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.823363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:39.243 [2024-12-16 21:24:28.823379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.243 [2024-12-16 21:24:28.823389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.823467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.823480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.243 [2024-12-16 21:24:28.823489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.243 [2024-12-16 21:24:28.823500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.823543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.823556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.243 [2024-12-16 21:24:28.823565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.243 [2024-12-16 21:24:28.823581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.823687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.823702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.243 [2024-12-16 21:24:28.823711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:39.243 [2024-12-16 21:24:28.823721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.243 [2024-12-16 21:24:28.823758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.243 [2024-12-16 21:24:28.823770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:39.244 [2024-12-16 21:24:28.823779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.244 [2024-12-16 21:24:28.823789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.244 [2024-12-16 21:24:28.823835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.244 [2024-12-16 21:24:28.823848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.244 [2024-12-16 21:24:28.823858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.244 [2024-12-16 21:24:28.823867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.244 [2024-12-16 21:24:28.823914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.244 [2024-12-16 21:24:28.823927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.244 [2024-12-16 21:24:28.823937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.244 [2024-12-16 21:24:28.823953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.244 [2024-12-16 21:24:28.824097] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 297.888 ms, result 0 00:18:39.244 true 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 88773 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 88773 ']' 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 88773 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88773 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88773' 00:18:39.244 killing process with pid 88773 00:18:39.244 Received shutdown signal, test time was about 4.000000 seconds 00:18:39.244 00:18:39.244 Latency(us) 00:18:39.244 [2024-12-16T21:24:28.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.244 [2024-12-16T21:24:28.944Z] =================================================================================================================== 00:18:39.244 [2024-12-16T21:24:28.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 88773 00:18:39.244 21:24:28 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 88773 00:18:39.505 21:24:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:39.505 Remove shared memory files 00:18:39.505 21:24:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:39.505 21:24:29 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:39.505 21:24:29 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:39.505 21:24:29 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:39.505 21:24:29 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:39.505 21:24:29 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:39.505 21:24:29 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:39.505 ************************************ 00:18:39.505 END TEST ftl_bdevperf 00:18:39.505 ************************************ 00:18:39.505 00:18:39.505 real 0m21.565s 00:18:39.505 user 0m24.169s 00:18:39.505 sys 0m0.892s 00:18:39.505 21:24:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.505 21:24:29 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:39.505 21:24:29 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:39.505 21:24:29 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:39.505 21:24:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.505 21:24:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:39.505 ************************************ 00:18:39.505 START TEST ftl_trim 00:18:39.505 ************************************ 00:18:39.505 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:39.775 * Looking for test storage... 00:18:39.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:39.775 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:39.775 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:18:39.775 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:39.775 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.775 21:24:29 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:39.775 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.775 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:39.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.775 --rc genhtml_branch_coverage=1 00:18:39.775 --rc genhtml_function_coverage=1 00:18:39.775 --rc genhtml_legend=1 00:18:39.775 --rc geninfo_all_blocks=1 00:18:39.775 --rc geninfo_unexecuted_blocks=1 00:18:39.775 00:18:39.775 ' 00:18:39.776 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:39.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.776 --rc genhtml_branch_coverage=1 00:18:39.776 --rc genhtml_function_coverage=1 00:18:39.776 --rc genhtml_legend=1 00:18:39.776 --rc geninfo_all_blocks=1 00:18:39.776 --rc geninfo_unexecuted_blocks=1 00:18:39.776 00:18:39.776 ' 00:18:39.776 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:39.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.776 --rc genhtml_branch_coverage=1 00:18:39.776 --rc genhtml_function_coverage=1 00:18:39.776 --rc genhtml_legend=1 00:18:39.776 --rc geninfo_all_blocks=1 00:18:39.776 --rc geninfo_unexecuted_blocks=1 00:18:39.776 00:18:39.776 ' 00:18:39.776 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:39.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.776 --rc genhtml_branch_coverage=1 00:18:39.776 --rc genhtml_function_coverage=1 00:18:39.776 --rc genhtml_legend=1 00:18:39.776 --rc geninfo_all_blocks=1 00:18:39.776 --rc geninfo_unexecuted_blocks=1 00:18:39.776 00:18:39.776 ' 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
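The hundred "Bands validity" records above are near-identical: after the clean shutdown every band reports 0 / 261120, wr_cnt 0, state free. When reading a saved capture of a console like this, counting bands per state is quicker than scanning the dump by eye; a hypothetical helper, assuming the output has been saved to a file named console.log:

```bash
# Count FTL bands per state in a saved console capture; for the band
# dump above this prints "100 state: free". The file name console.log
# is an assumption, not something the test itself produces.
grep -o 'state: [a-z]*' console.log | sort | uniq -c
```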
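Earlier in this stretch, killprocess 88773 tears down the bdevperf process: it checks that the PID argument is non-empty, probes liveness with kill -0, resolves the command name via ps --no-headers -o comm= so that a sudo wrapper is never signalled directly, then kills and waits. A sketch of that flow reconstructed from the trace (Linux branch only; not the verbatim autotest_common.sh function):

```bash
# Reconstruction of the killprocess flow from the trace (Linux branch
# only; the verbatim helper lives in autotest_common.sh). The wait call
# assumes, as in the test, that the PID is a child of this shell.
killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                  # the '[' -z 88773 ']' guard
    kill -0 "$pid" 2>/dev/null || return 1     # process must still be alive
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ $process_name == sudo ]] && return 1    # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap and propagate exit status
}
```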
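The xtrace just above steps through scripts/common.sh comparing lcov 1.15 against 2: each version string is split on '.', '-' and ':' into an array, and the components are compared numerically with the shorter array zero-padded. A minimal re-implementation of that pattern, supporting only the '<', '>' and '=' operators (the helper name cmp_versions_sketch is ours, not SPDK's):

```bash
# Minimal sketch of the version comparison traced above: split each
# version on '.', '-' and ':' and compare components numerically,
# padding the shorter version with zeros. Supports <, > and = only.
cmp_versions_sketch() {            # usage: cmp_versions_sketch 1.15 '<' 2
    local IFS=.-:                  # same separators as the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]               # all components equal
}

cmp_versions_sketch 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"
```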
00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:39.776 21:24:29 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89116 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89116 00:18:39.776 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89116 ']' 00:18:39.776 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.776 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.776 21:24:29 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:39.776 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.776 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.776 21:24:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:39.776 [2024-12-16 21:24:29.450519] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:39.776 [2024-12-16 21:24:29.450702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89116 ] 00:18:40.036 [2024-12-16 21:24:29.600450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:40.036 [2024-12-16 21:24:29.633671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.036 [2024-12-16 21:24:29.633827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.036 [2024-12-16 21:24:29.633911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.609 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.609 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:40.609 21:24:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:40.609 21:24:30 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:40.609 21:24:30 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:40.609 21:24:30 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:40.609 21:24:30 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:40.609 21:24:30 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:41.182 21:24:30 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:41.182 21:24:30 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:41.182 21:24:30 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:41.182 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:41.182 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:41.182 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:41.182 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:41.182 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:41.182 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:41.182 { 00:18:41.182 "name": "nvme0n1", 00:18:41.182 "aliases": [ 
00:18:41.182 "a7a64873-9585-4d75-aacb-6085e3f8fcaa" 00:18:41.182 ], 00:18:41.182 "product_name": "NVMe disk", 00:18:41.182 "block_size": 4096, 00:18:41.182 "num_blocks": 1310720, 00:18:41.182 "uuid": "a7a64873-9585-4d75-aacb-6085e3f8fcaa", 00:18:41.182 "numa_id": -1, 00:18:41.182 "assigned_rate_limits": { 00:18:41.182 "rw_ios_per_sec": 0, 00:18:41.182 "rw_mbytes_per_sec": 0, 00:18:41.182 "r_mbytes_per_sec": 0, 00:18:41.182 "w_mbytes_per_sec": 0 00:18:41.182 }, 00:18:41.182 "claimed": true, 00:18:41.182 "claim_type": "read_many_write_one", 00:18:41.182 "zoned": false, 00:18:41.182 "supported_io_types": { 00:18:41.182 "read": true, 00:18:41.182 "write": true, 00:18:41.182 "unmap": true, 00:18:41.182 "flush": true, 00:18:41.182 "reset": true, 00:18:41.182 "nvme_admin": true, 00:18:41.182 "nvme_io": true, 00:18:41.182 "nvme_io_md": false, 00:18:41.182 "write_zeroes": true, 00:18:41.182 "zcopy": false, 00:18:41.182 "get_zone_info": false, 00:18:41.182 "zone_management": false, 00:18:41.182 "zone_append": false, 00:18:41.182 "compare": true, 00:18:41.182 "compare_and_write": false, 00:18:41.182 "abort": true, 00:18:41.182 "seek_hole": false, 00:18:41.182 "seek_data": false, 00:18:41.182 "copy": true, 00:18:41.182 "nvme_iov_md": false 00:18:41.182 }, 00:18:41.182 "driver_specific": { 00:18:41.182 "nvme": [ 00:18:41.182 { 00:18:41.182 "pci_address": "0000:00:11.0", 00:18:41.182 "trid": { 00:18:41.182 "trtype": "PCIe", 00:18:41.182 "traddr": "0000:00:11.0" 00:18:41.182 }, 00:18:41.182 "ctrlr_data": { 00:18:41.182 "cntlid": 0, 00:18:41.182 "vendor_id": "0x1b36", 00:18:41.182 "model_number": "QEMU NVMe Ctrl", 00:18:41.182 "serial_number": "12341", 00:18:41.182 "firmware_revision": "8.0.0", 00:18:41.182 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:41.182 "oacs": { 00:18:41.182 "security": 0, 00:18:41.182 "format": 1, 00:18:41.182 "firmware": 0, 00:18:41.182 "ns_manage": 1 00:18:41.182 }, 00:18:41.182 "multi_ctrlr": false, 00:18:41.182 "ana_reporting": false 00:18:41.182 }, 00:18:41.182 "vs": { 00:18:41.182 "nvme_version": "1.4" 00:18:41.182 }, 00:18:41.182 "ns_data": { 00:18:41.182 "id": 1, 00:18:41.182 "can_share": false 00:18:41.182 } 00:18:41.182 } 00:18:41.182 ], 00:18:41.182 "mp_policy": "active_passive" 00:18:41.182 } 00:18:41.182 } 00:18:41.182 ]' 00:18:41.182 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:41.182 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:41.182 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:41.494 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:41.494 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:41.494 21:24:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:18:41.494 21:24:30 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:41.494 21:24:30 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:41.494 21:24:30 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:41.494 21:24:30 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:41.494 21:24:30 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:41.494 21:24:31 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=d193c359-4ece-434b-9bce-cd83ac04cafc 00:18:41.494 21:24:31 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:41.494 21:24:31 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u d193c359-4ece-434b-9bce-cd83ac04cafc 00:18:41.756 21:24:31 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:42.017 21:24:31 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=9da97f88-1f6b-4134-a1b0-1cbc21e1f200 00:18:42.017 21:24:31 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9da97f88-1f6b-4134-a1b0-1cbc21e1f200 00:18:42.278 21:24:31 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:42.278 21:24:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:42.278 21:24:31 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:42.278 21:24:31 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:42.278 21:24:31 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:42.278 21:24:31 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:42.278 21:24:31 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:42.278 21:24:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:42.278 21:24:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:42.278 21:24:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:42.278 21:24:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:42.278 21:24:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:42.540 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:42.540 { 00:18:42.540 "name": "1bff26da-0944-4a06-81ee-c5aab15b4487", 00:18:42.540 "aliases": [ 00:18:42.540 "lvs/nvme0n1p0" 00:18:42.540 ], 00:18:42.540 "product_name": "Logical Volume", 00:18:42.540 "block_size": 4096, 00:18:42.540 "num_blocks": 26476544, 00:18:42.540 "uuid": "1bff26da-0944-4a06-81ee-c5aab15b4487", 00:18:42.540 "assigned_rate_limits": { 00:18:42.540 "rw_ios_per_sec": 0, 00:18:42.540 "rw_mbytes_per_sec": 0, 00:18:42.540 "r_mbytes_per_sec": 0, 00:18:42.540 "w_mbytes_per_sec": 0 00:18:42.540 }, 00:18:42.540 "claimed": false, 00:18:42.540 "zoned": false, 00:18:42.540 "supported_io_types": { 00:18:42.540 "read": true, 00:18:42.540 "write": true, 00:18:42.540 "unmap": true, 00:18:42.540 "flush": false, 00:18:42.540 "reset": true, 00:18:42.540 "nvme_admin": false, 00:18:42.540 "nvme_io": false, 00:18:42.540 "nvme_io_md": false, 00:18:42.540 "write_zeroes": true, 00:18:42.540 "zcopy": false, 00:18:42.540 "get_zone_info": false, 00:18:42.540 "zone_management": false, 00:18:42.540 "zone_append": false, 00:18:42.540 "compare": false, 00:18:42.540 "compare_and_write": false, 00:18:42.540 "abort": false, 00:18:42.540 "seek_hole": true, 00:18:42.540 "seek_data": true, 00:18:42.540 "copy": false, 00:18:42.540 "nvme_iov_md": false 00:18:42.540 }, 00:18:42.540 "driver_specific": { 00:18:42.540 "lvol": { 00:18:42.540 "lvol_store_uuid": "9da97f88-1f6b-4134-a1b0-1cbc21e1f200", 00:18:42.540 "base_bdev": "nvme0n1", 00:18:42.540 "thin_provision": true, 00:18:42.540 "num_allocated_clusters": 0, 00:18:42.540 "snapshot": false, 00:18:42.540 "clone": false, 00:18:42.540 "esnap_clone": false 00:18:42.540 } 00:18:42.540 } 00:18:42.540 } 00:18:42.540 ]' 00:18:42.540 21:24:32 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:42.540 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:42.540 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:42.540 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:42.540 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:42.540 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:42.540 21:24:32 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:42.540 21:24:32 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:42.540 21:24:32 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:42.801 21:24:32 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:42.801 21:24:32 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:42.801 21:24:32 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:42.801 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:42.801 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:42.801 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:42.801 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:42.801 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:43.060 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:43.060 { 00:18:43.060 "name": "1bff26da-0944-4a06-81ee-c5aab15b4487", 00:18:43.060 "aliases": [ 00:18:43.060 "lvs/nvme0n1p0" 00:18:43.060 ], 00:18:43.060 "product_name": "Logical Volume", 00:18:43.060 "block_size": 4096, 00:18:43.060 "num_blocks": 26476544, 00:18:43.060 "uuid": "1bff26da-0944-4a06-81ee-c5aab15b4487", 00:18:43.060 "assigned_rate_limits": { 00:18:43.060 "rw_ios_per_sec": 0, 00:18:43.060 "rw_mbytes_per_sec": 0, 00:18:43.060 "r_mbytes_per_sec": 0, 00:18:43.060 "w_mbytes_per_sec": 0 00:18:43.060 }, 00:18:43.060 "claimed": false, 00:18:43.060 "zoned": false, 00:18:43.060 "supported_io_types": { 00:18:43.060 "read": true, 00:18:43.060 "write": true, 00:18:43.060 "unmap": true, 00:18:43.060 "flush": false, 00:18:43.060 "reset": true, 00:18:43.060 "nvme_admin": false, 00:18:43.060 "nvme_io": false, 00:18:43.060 "nvme_io_md": false, 00:18:43.060 "write_zeroes": true, 00:18:43.060 "zcopy": false, 00:18:43.060 "get_zone_info": false, 00:18:43.060 "zone_management": false, 00:18:43.060 "zone_append": false, 00:18:43.060 "compare": false, 00:18:43.060 "compare_and_write": false, 00:18:43.060 "abort": false, 00:18:43.060 "seek_hole": true, 00:18:43.060 "seek_data": true, 00:18:43.060 "copy": false, 00:18:43.060 "nvme_iov_md": false 00:18:43.060 }, 00:18:43.060 "driver_specific": { 00:18:43.060 "lvol": { 00:18:43.060 "lvol_store_uuid": "9da97f88-1f6b-4134-a1b0-1cbc21e1f200", 00:18:43.060 "base_bdev": "nvme0n1", 00:18:43.060 "thin_provision": true, 00:18:43.060 "num_allocated_clusters": 0, 00:18:43.060 "snapshot": false, 00:18:43.060 "clone": false, 00:18:43.060 "esnap_clone": false 00:18:43.060 } 00:18:43.060 } 00:18:43.060 } 00:18:43.060 ]' 00:18:43.060 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:43.060 21:24:32 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:18:43.060 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:43.060 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:43.060 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:43.060 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:43.060 21:24:32 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:43.060 21:24:32 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:43.319 21:24:32 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:43.319 21:24:32 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:43.319 21:24:32 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:43.319 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:43.319 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:43.319 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:43.319 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:43.319 21:24:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bff26da-0944-4a06-81ee-c5aab15b4487 00:18:43.577 21:24:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:43.577 { 00:18:43.577 "name": "1bff26da-0944-4a06-81ee-c5aab15b4487", 00:18:43.577 "aliases": [ 00:18:43.577 "lvs/nvme0n1p0" 00:18:43.577 ], 00:18:43.577 "product_name": "Logical Volume", 00:18:43.577 "block_size": 4096, 00:18:43.577 "num_blocks": 26476544, 00:18:43.577 "uuid": "1bff26da-0944-4a06-81ee-c5aab15b4487", 00:18:43.577 "assigned_rate_limits": { 00:18:43.577 "rw_ios_per_sec": 0, 00:18:43.577 "rw_mbytes_per_sec": 0, 00:18:43.577 "r_mbytes_per_sec": 0, 00:18:43.577 "w_mbytes_per_sec": 0 00:18:43.577 }, 00:18:43.577 "claimed": false, 00:18:43.577 "zoned": false, 00:18:43.578 "supported_io_types": { 00:18:43.578 "read": true, 00:18:43.578 "write": true, 00:18:43.578 "unmap": true, 00:18:43.578 "flush": false, 00:18:43.578 "reset": true, 00:18:43.578 "nvme_admin": false, 00:18:43.578 "nvme_io": false, 00:18:43.578 "nvme_io_md": false, 00:18:43.578 "write_zeroes": true, 00:18:43.578 "zcopy": false, 00:18:43.578 "get_zone_info": false, 00:18:43.578 "zone_management": false, 00:18:43.578 "zone_append": false, 00:18:43.578 "compare": false, 00:18:43.578 "compare_and_write": false, 00:18:43.578 "abort": false, 00:18:43.578 "seek_hole": true, 00:18:43.578 "seek_data": true, 00:18:43.578 "copy": false, 00:18:43.578 "nvme_iov_md": false 00:18:43.578 }, 00:18:43.578 "driver_specific": { 00:18:43.578 "lvol": { 00:18:43.578 "lvol_store_uuid": "9da97f88-1f6b-4134-a1b0-1cbc21e1f200", 00:18:43.578 "base_bdev": "nvme0n1", 00:18:43.578 "thin_provision": true, 00:18:43.578 "num_allocated_clusters": 0, 00:18:43.578 "snapshot": false, 00:18:43.578 "clone": false, 00:18:43.578 "esnap_clone": false 00:18:43.578 } 00:18:43.578 } 00:18:43.578 } 00:18:43.578 ]' 00:18:43.578 21:24:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:43.578 21:24:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:43.578 21:24:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:43.578 21:24:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:18:43.578 21:24:33 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:43.578 21:24:33 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:43.578 21:24:33 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:43.578 21:24:33 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1bff26da-0944-4a06-81ee-c5aab15b4487 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:43.837 [2024-12-16 21:24:33.372510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.837 [2024-12-16 21:24:33.372562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:43.837 [2024-12-16 21:24:33.372576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:43.837 [2024-12-16 21:24:33.372589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.837 [2024-12-16 21:24:33.375083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.837 [2024-12-16 21:24:33.375238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.837 [2024-12-16 21:24:33.375255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.466 ms 00:18:43.837 [2024-12-16 21:24:33.375279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.837 [2024-12-16 21:24:33.375448] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:43.837 [2024-12-16 21:24:33.375740] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:43.837 [2024-12-16 21:24:33.375764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.837 [2024-12-16 21:24:33.375774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.837 [2024-12-16 21:24:33.375786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:18:43.837 [2024-12-16 21:24:33.375796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.837 [2024-12-16 21:24:33.376072] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ed315229-44e7-4b17-bfcd-321e68a18dd7 00:18:43.837 [2024-12-16 21:24:33.377472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.837 [2024-12-16 21:24:33.377509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:43.837 [2024-12-16 21:24:33.377522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:43.837 [2024-12-16 21:24:33.377531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.837 [2024-12-16 21:24:33.384740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.837 [2024-12-16 21:24:33.384770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.837 [2024-12-16 21:24:33.384781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.120 ms 00:18:43.837 [2024-12-16 21:24:33.384790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.837 [2024-12-16 21:24:33.384915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.837 [2024-12-16 21:24:33.384925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.837 [2024-12-16 21:24:33.384935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.060 ms 00:18:43.837 [2024-12-16 21:24:33.384944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.837 [2024-12-16 21:24:33.384997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.837 [2024-12-16 21:24:33.385006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:43.837 [2024-12-16 21:24:33.385017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:43.837 [2024-12-16 21:24:33.385035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.837 [2024-12-16 21:24:33.385071] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:43.837 [2024-12-16 21:24:33.386836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.837 [2024-12-16 21:24:33.386874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.837 [2024-12-16 21:24:33.386885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.775 ms 00:18:43.837 [2024-12-16 21:24:33.386895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.837 [2024-12-16 21:24:33.386955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.837 [2024-12-16 21:24:33.386967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:43.837 [2024-12-16 21:24:33.386975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:43.837 [2024-12-16 21:24:33.386986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.837 [2024-12-16 21:24:33.387024] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:43.837 [2024-12-16 21:24:33.387193] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:43.837 [2024-12-16 21:24:33.387205] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:43.837 [2024-12-16 21:24:33.387218] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:43.838 [2024-12-16 21:24:33.387238] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:43.838 [2024-12-16 21:24:33.387248] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:43.838 [2024-12-16 21:24:33.387265] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:43.838 [2024-12-16 21:24:33.387274] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:43.838 [2024-12-16 21:24:33.387282] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:43.838 [2024-12-16 21:24:33.387301] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:43.838 [2024-12-16 21:24:33.387310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.838 [2024-12-16 21:24:33.387329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:43.838 [2024-12-16 21:24:33.387337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:18:43.838 [2024-12-16 21:24:33.387355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.838 [2024-12-16 21:24:33.387454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.838 
[2024-12-16 21:24:33.387465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:43.838 [2024-12-16 21:24:33.387474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:43.838 [2024-12-16 21:24:33.387482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.838 [2024-12-16 21:24:33.387616] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:43.838 [2024-12-16 21:24:33.387642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:43.838 [2024-12-16 21:24:33.387652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.838 [2024-12-16 21:24:33.387663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:43.838 [2024-12-16 21:24:33.387681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:43.838 [2024-12-16 21:24:33.387699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:43.838 [2024-12-16 21:24:33.387708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.838 [2024-12-16 21:24:33.387724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:43.838 [2024-12-16 21:24:33.387733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:43.838 [2024-12-16 21:24:33.387741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.838 [2024-12-16 21:24:33.387752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:43.838 [2024-12-16 21:24:33.387760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:43.838 [2024-12-16 21:24:33.387769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:43.838 [2024-12-16 21:24:33.387788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:43.838 [2024-12-16 21:24:33.387796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:43.838 [2024-12-16 21:24:33.387813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.838 [2024-12-16 21:24:33.387842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:43.838 [2024-12-16 21:24:33.387851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.838 [2024-12-16 21:24:33.387868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:43.838 [2024-12-16 21:24:33.387875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.838 [2024-12-16 21:24:33.387889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:43.838 [2024-12-16 21:24:33.387899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.838 [2024-12-16 21:24:33.387914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:43.838 [2024-12-16 21:24:33.387920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.838 [2024-12-16 21:24:33.387935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:43.838 [2024-12-16 21:24:33.387943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:43.838 [2024-12-16 21:24:33.387949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.838 [2024-12-16 21:24:33.387957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:43.838 [2024-12-16 21:24:33.387966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:43.838 [2024-12-16 21:24:33.387976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.838 [2024-12-16 21:24:33.387982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:43.838 [2024-12-16 21:24:33.387991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:43.838 [2024-12-16 21:24:33.387997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.838 [2024-12-16 21:24:33.388007] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:43.838 [2024-12-16 21:24:33.388015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:43.838 [2024-12-16 21:24:33.388025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.838 [2024-12-16 21:24:33.388032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.838 [2024-12-16 21:24:33.388041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:43.838 [2024-12-16 21:24:33.388048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:43.838 [2024-12-16 21:24:33.388056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:43.838 [2024-12-16 21:24:33.388062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:43.838 [2024-12-16 21:24:33.388070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:43.838 [2024-12-16 21:24:33.388077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:43.838 [2024-12-16 21:24:33.388087] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:43.838 [2024-12-16 21:24:33.388095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.838 [2024-12-16 21:24:33.388106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:43.838 [2024-12-16 21:24:33.388113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:43.838 [2024-12-16 21:24:33.388121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:43.838 [2024-12-16 21:24:33.388128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:43.838 [2024-12-16 21:24:33.388137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:43.838 [2024-12-16 21:24:33.388144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:43.838 [2024-12-16 21:24:33.388155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:43.838 [2024-12-16 21:24:33.388162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:43.838 [2024-12-16 21:24:33.388170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:43.838 [2024-12-16 21:24:33.388177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:43.838 [2024-12-16 21:24:33.388186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:43.838 [2024-12-16 21:24:33.388192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:43.838 [2024-12-16 21:24:33.388201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:43.838 [2024-12-16 21:24:33.388209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:43.838 [2024-12-16 21:24:33.388219] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:43.838 [2024-12-16 21:24:33.388228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.838 [2024-12-16 21:24:33.388241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:43.838 [2024-12-16 21:24:33.388248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:43.838 [2024-12-16 21:24:33.388257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:43.838 [2024-12-16 21:24:33.388264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:43.838 [2024-12-16 21:24:33.388273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.838 [2024-12-16 21:24:33.388280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:43.838 [2024-12-16 21:24:33.388291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:18:43.838 [2024-12-16 21:24:33.388298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.838 [2024-12-16 21:24:33.388373] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:43.838 [2024-12-16 21:24:33.388382] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:46.367 [2024-12-16 21:24:35.943165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.367 [2024-12-16 21:24:35.943217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:46.367 [2024-12-16 21:24:35.943233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2554.778 ms 00:18:46.367 [2024-12-16 21:24:35.943245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.367 [2024-12-16 21:24:35.951809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.367 [2024-12-16 21:24:35.951973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.367 [2024-12-16 21:24:35.951995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.470 ms 00:18:46.367 [2024-12-16 21:24:35.952004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.367 [2024-12-16 21:24:35.952142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.367 [2024-12-16 21:24:35.952153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:46.367 [2024-12-16 21:24:35.952164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:46.367 [2024-12-16 21:24:35.952174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.367 [2024-12-16 21:24:35.970200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.367 [2024-12-16 21:24:35.970253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.367 [2024-12-16 21:24:35.970275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.991 ms 00:18:46.367 [2024-12-16 21:24:35.970288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.367 [2024-12-16 21:24:35.970447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.367 [2024-12-16 21:24:35.970467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.367 [2024-12-16 21:24:35.970487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:46.367 [2024-12-16 21:24:35.970498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.367 [2024-12-16 21:24:35.970899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.367 [2024-12-16 21:24:35.970923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.367 [2024-12-16 21:24:35.970942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:18:46.367 [2024-12-16 21:24:35.970956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.367 [2024-12-16 21:24:35.971163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.367 [2024-12-16 21:24:35.971248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.367 [2024-12-16 21:24:35.971269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:18:46.367 [2024-12-16 21:24:35.971288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.367 [2024-12-16 21:24:35.981360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.367 [2024-12-16 21:24:35.981406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:46.367 [2024-12-16 21:24:35.981423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.025 ms 00:18:46.367 [2024-12-16 21:24:35.981432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.367 [2024-12-16 21:24:35.990490] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:46.367 [2024-12-16 21:24:36.007958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.367 [2024-12-16 21:24:36.007995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:46.367 [2024-12-16 21:24:36.008007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.376 ms 00:18:46.367 [2024-12-16 21:24:36.008017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.070126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.070172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:46.626 [2024-12-16 21:24:36.070185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.029 ms 00:18:46.626 [2024-12-16 21:24:36.070199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.070396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.070410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:46.626 [2024-12-16 21:24:36.070421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:18:46.626 [2024-12-16 21:24:36.070430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.073793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.073830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:46.626 [2024-12-16 21:24:36.073841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms 00:18:46.626 [2024-12-16 21:24:36.073851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.076491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.076525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:46.626 [2024-12-16 21:24:36.076535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:18:46.626 [2024-12-16 21:24:36.076544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.076898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.076919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:46.626 [2024-12-16 21:24:36.076929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:18:46.626 [2024-12-16 21:24:36.076940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.109765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.109807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:46.626 [2024-12-16 21:24:36.109831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.761 ms 00:18:46.626 [2024-12-16 21:24:36.109843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
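Nearly all of the startup wall time above sits in the NV-cache scrub: 2554.778 ms for 'Scrub NV cache' against tens of milliseconds or less for every other step (the next largest so far, 'Clear L2P', is 62.029 ms). A minimal sketch for tallying these per-step timings from a saved console log follows; the log path comes from argv, and the regexes are illustrative, keyed to the trace_step NOTICE format printed above rather than to any SPDK API:

#!/usr/bin/env python3
# Sketch: tally FTL management trace steps from an SPDK autotest console log.
# Assumes the concatenated "name: <step> ... duration: <ms> ms" entries above.
import re
import sys

text = open(sys.argv[1]).read()

# Each management step logs "name: <step>" and then "duration: <ms> ms"; in
# this concatenated capture the step name runs up to the next elapsed-time
# prefix (e.g. "00:18:46.367 ["), which the lookahead below relies on.
names = re.findall(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.*?)(?= \d{2}:\d{2}:\d{2}\.\d{3} \[)",
    text)
durations = [float(ms) for ms in re.findall(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms", text)]

# Steps log name and duration in order, so pairing by position is safe here.
# Print the slowest steps first; the scrub dominates this run.
for step, ms in sorted(zip(names, durations), key=lambda p: -p[1]):
    print(f"{ms:9.3f} ms  {step}")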
00:18:46.626 [2024-12-16 21:24:36.114195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.114235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:46.626 [2024-12-16 21:24:36.114246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.276 ms 00:18:46.626 [2024-12-16 21:24:36.114257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.117469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.117502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:46.626 [2024-12-16 21:24:36.117512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.168 ms 00:18:46.626 [2024-12-16 21:24:36.117522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.125263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.125377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:46.626 [2024-12-16 21:24:36.125412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.691 ms 00:18:46.626 [2024-12-16 21:24:36.125445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.125575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.125609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:46.626 [2024-12-16 21:24:36.125675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:46.626 [2024-12-16 21:24:36.125704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.125894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.626 [2024-12-16 21:24:36.125969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:46.626 [2024-12-16 21:24:36.125997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:46.626 [2024-12-16 21:24:36.126027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.626 [2024-12-16 21:24:36.128133] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:46.626 [2024-12-16 21:24:36.131371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2754.769 ms, result 0 00:18:46.626 [2024-12-16 21:24:36.132905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:46.626 { 00:18:46.626 "name": "ftl0", 00:18:46.626 "uuid": "ed315229-44e7-4b17-bfcd-321e68a18dd7" 00:18:46.626 } 00:18:46.626 21:24:36 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:46.626 21:24:36 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:46.627 21:24:36 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:46.627 21:24:36 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:18:46.627 21:24:36 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:46.627 21:24:36 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:46.627 21:24:36 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:46.885 21:24:36 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:46.885 [ 00:18:46.885 { 00:18:46.885 "name": "ftl0", 00:18:46.885 "aliases": [ 00:18:46.885 "ed315229-44e7-4b17-bfcd-321e68a18dd7" 00:18:46.885 ], 00:18:46.885 "product_name": "FTL disk", 00:18:46.885 "block_size": 4096, 00:18:46.885 "num_blocks": 23592960, 00:18:46.885 "uuid": "ed315229-44e7-4b17-bfcd-321e68a18dd7", 00:18:46.885 "assigned_rate_limits": { 00:18:46.885 "rw_ios_per_sec": 0, 00:18:46.885 "rw_mbytes_per_sec": 0, 00:18:46.885 "r_mbytes_per_sec": 0, 00:18:46.885 "w_mbytes_per_sec": 0 00:18:46.885 }, 00:18:46.885 "claimed": false, 00:18:46.885 "zoned": false, 00:18:46.885 "supported_io_types": { 00:18:46.885 "read": true, 00:18:46.885 "write": true, 00:18:46.885 "unmap": true, 00:18:46.885 "flush": true, 00:18:46.885 "reset": false, 00:18:46.885 "nvme_admin": false, 00:18:46.885 "nvme_io": false, 00:18:46.885 "nvme_io_md": false, 00:18:46.885 "write_zeroes": true, 00:18:46.885 "zcopy": false, 00:18:46.885 "get_zone_info": false, 00:18:46.885 "zone_management": false, 00:18:46.885 "zone_append": false, 00:18:46.885 "compare": false, 00:18:46.885 "compare_and_write": false, 00:18:46.885 "abort": false, 00:18:46.885 "seek_hole": false, 00:18:46.885 "seek_data": false, 00:18:46.885 "copy": false, 00:18:46.885 "nvme_iov_md": false 00:18:46.885 }, 00:18:46.885 "driver_specific": { 00:18:46.885 "ftl": { 00:18:46.885 "base_bdev": "1bff26da-0944-4a06-81ee-c5aab15b4487", 00:18:46.885 "cache": "nvc0n1p0" 00:18:46.885 } 00:18:46.885 } 00:18:46.885 } 00:18:46.885 ] 00:18:46.885 21:24:36 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:18:46.885 21:24:36 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:46.885 21:24:36 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:47.171 21:24:36 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:47.171 21:24:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:47.434 21:24:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:47.434 { 00:18:47.434 "name": "ftl0", 00:18:47.434 "aliases": [ 00:18:47.434 "ed315229-44e7-4b17-bfcd-321e68a18dd7" 00:18:47.434 ], 00:18:47.434 "product_name": "FTL disk", 00:18:47.434 "block_size": 4096, 00:18:47.434 "num_blocks": 23592960, 00:18:47.434 "uuid": "ed315229-44e7-4b17-bfcd-321e68a18dd7", 00:18:47.434 "assigned_rate_limits": { 00:18:47.434 "rw_ios_per_sec": 0, 00:18:47.434 "rw_mbytes_per_sec": 0, 00:18:47.434 "r_mbytes_per_sec": 0, 00:18:47.434 "w_mbytes_per_sec": 0 00:18:47.434 }, 00:18:47.434 "claimed": false, 00:18:47.434 "zoned": false, 00:18:47.434 "supported_io_types": { 00:18:47.434 "read": true, 00:18:47.434 "write": true, 00:18:47.434 "unmap": true, 00:18:47.434 "flush": true, 00:18:47.434 "reset": false, 00:18:47.434 "nvme_admin": false, 00:18:47.434 "nvme_io": false, 00:18:47.434 "nvme_io_md": false, 00:18:47.434 "write_zeroes": true, 00:18:47.434 "zcopy": false, 00:18:47.434 "get_zone_info": false, 00:18:47.434 "zone_management": false, 00:18:47.434 "zone_append": false, 00:18:47.434 "compare": false, 00:18:47.434 "compare_and_write": false, 00:18:47.434 "abort": false, 00:18:47.434 "seek_hole": false, 00:18:47.434 "seek_data": false, 00:18:47.434 "copy": false, 00:18:47.434 "nvme_iov_md": false 00:18:47.434 }, 00:18:47.434 "driver_specific": { 00:18:47.434 "ftl": { 00:18:47.434 "base_bdev": "1bff26da-0944-4a06-81ee-c5aab15b4487", 
00:18:47.434 "cache": "nvc0n1p0" 00:18:47.434 } 00:18:47.434 } 00:18:47.434 } 00:18:47.434 ]' 00:18:47.434 21:24:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:47.434 21:24:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:47.434 21:24:36 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:47.694 [2024-12-16 21:24:37.167018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.694 [2024-12-16 21:24:37.167148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:47.694 [2024-12-16 21:24:37.167170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:47.694 [2024-12-16 21:24:37.167178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.694 [2024-12-16 21:24:37.167227] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:47.694 [2024-12-16 21:24:37.167694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.694 [2024-12-16 21:24:37.167713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:47.695 [2024-12-16 21:24:37.167734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:18:47.695 [2024-12-16 21:24:37.167743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.168321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.168357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:47.695 [2024-12-16 21:24:37.168366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:18:47.695 [2024-12-16 21:24:37.168374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.172028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.172052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:47.695 [2024-12-16 21:24:37.172062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.628 ms 00:18:47.695 [2024-12-16 21:24:37.172072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.178980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.179014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:47.695 [2024-12-16 21:24:37.179024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.849 ms 00:18:47.695 [2024-12-16 21:24:37.179035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.180685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.180720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:47.695 [2024-12-16 21:24:37.180729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:18:47.695 [2024-12-16 21:24:37.180740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.185031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.185067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:47.695 [2024-12-16 21:24:37.185076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.245 ms 00:18:47.695 [2024-12-16 21:24:37.185087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.185283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.185295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:47.695 [2024-12-16 21:24:37.185303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:18:47.695 [2024-12-16 21:24:37.185312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.187077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.187110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:47.695 [2024-12-16 21:24:37.187118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:18:47.695 [2024-12-16 21:24:37.187129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.188665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.188722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:47.695 [2024-12-16 21:24:37.188730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:18:47.695 [2024-12-16 21:24:37.188739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.189921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.190041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:47.695 [2024-12-16 21:24:37.190055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:18:47.695 [2024-12-16 21:24:37.190064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.191340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.695 [2024-12-16 21:24:37.191376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:47.695 [2024-12-16 21:24:37.191385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:18:47.695 [2024-12-16 21:24:37.191393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.695 [2024-12-16 21:24:37.191446] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:47.695 [2024-12-16 21:24:37.191464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191526] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 
21:24:37.191745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:47.695 [2024-12-16 21:24:37.191912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:18:47.696 [2024-12-16 21:24:37.191952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.191999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:47.696 [2024-12-16 21:24:37.192324] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:47.696 [2024-12-16 21:24:37.192332] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed315229-44e7-4b17-bfcd-321e68a18dd7 00:18:47.696 [2024-12-16 21:24:37.192341] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:47.696 [2024-12-16 21:24:37.192348] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:47.696 [2024-12-16 21:24:37.192360] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:47.696 [2024-12-16 21:24:37.192368] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:47.696 [2024-12-16 21:24:37.192376] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:47.696 [2024-12-16 21:24:37.192383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:47.696 
[2024-12-16 21:24:37.192392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:47.696 [2024-12-16 21:24:37.192399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:47.696 [2024-12-16 21:24:37.192407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:47.696 [2024-12-16 21:24:37.192413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.696 [2024-12-16 21:24:37.192422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:47.696 [2024-12-16 21:24:37.192431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:18:47.696 [2024-12-16 21:24:37.192441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.194322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.696 [2024-12-16 21:24:37.194418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:47.696 [2024-12-16 21:24:37.194468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:18:47.696 [2024-12-16 21:24:37.194492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.194646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.696 [2024-12-16 21:24:37.194718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:47.696 [2024-12-16 21:24:37.194767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:47.696 [2024-12-16 21:24:37.194791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.200261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.696 [2024-12-16 21:24:37.200365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.696 [2024-12-16 21:24:37.200416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.696 [2024-12-16 21:24:37.200443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.200551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.696 [2024-12-16 21:24:37.200584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.696 [2024-12-16 21:24:37.200656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.696 [2024-12-16 21:24:37.200724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.200800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.696 [2024-12-16 21:24:37.200857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.696 [2024-12-16 21:24:37.200909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.696 [2024-12-16 21:24:37.200933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.200990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.696 [2024-12-16 21:24:37.201134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.696 [2024-12-16 21:24:37.201158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.696 [2024-12-16 21:24:37.201179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.210926] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:47.696 [2024-12-16 21:24:37.211077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.696 [2024-12-16 21:24:37.211131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.696 [2024-12-16 21:24:37.211157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.219230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.696 [2024-12-16 21:24:37.219359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.696 [2024-12-16 21:24:37.219412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.696 [2024-12-16 21:24:37.219439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.219509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.696 [2024-12-16 21:24:37.219709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.696 [2024-12-16 21:24:37.219780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.696 [2024-12-16 21:24:37.219806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.219911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.696 [2024-12-16 21:24:37.219967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.696 [2024-12-16 21:24:37.220009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.696 [2024-12-16 21:24:37.220114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.696 [2024-12-16 21:24:37.220231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.697 [2024-12-16 21:24:37.220272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.697 [2024-12-16 21:24:37.220330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.697 [2024-12-16 21:24:37.220357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.697 [2024-12-16 21:24:37.220428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.697 [2024-12-16 21:24:37.220712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:47.697 [2024-12-16 21:24:37.220798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.697 [2024-12-16 21:24:37.220872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.697 [2024-12-16 21:24:37.220965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.697 [2024-12-16 21:24:37.221102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.697 [2024-12-16 21:24:37.221129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.697 [2024-12-16 21:24:37.221142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.697 [2024-12-16 21:24:37.221200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.697 [2024-12-16 21:24:37.221212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.697 [2024-12-16 21:24:37.221221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.697 [2024-12-16 21:24:37.221230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.697 
[2024-12-16 21:24:37.221445] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.397 ms, result 0 00:18:47.697 true 00:18:47.697 21:24:37 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89116 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89116 ']' 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89116 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89116 00:18:47.697 killing process with pid 89116 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89116' 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89116 00:18:47.697 21:24:37 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89116 00:18:52.964 21:24:41 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:53.223 65536+0 records in 00:18:53.223 65536+0 records out 00:18:53.223 268435456 bytes (268 MB, 256 MiB) copied, 0.817203 s, 328 MB/s 00:18:53.223 21:24:42 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:53.223 [2024-12-16 21:24:42.812070] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
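The dd figures above are self-consistent: 65536 records of 4 KiB is 268435456 bytes, exactly 256 MiB (about 268 MB decimal), and 268435456 B / 0.817203 s comes to about 328 MB/s, matching the reported rate. A short check, using nothing beyond the figures in the log:

total = 65536 * 4096                                 # count=65536, bs=4K
assert total == 268435456                            # 256 MiB == 268 MB (decimal)
print(total / 2**20, round(total / 0.817203 / 1e6))  # -> 256.0 328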
00:18:53.223 [2024-12-16 21:24:42.812655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89271 ] 00:18:53.483 [2024-12-16 21:24:42.959273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.483 [2024-12-16 21:24:42.978520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.483 [2024-12-16 21:24:43.069161] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:53.483 [2024-12-16 21:24:43.069411] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:53.745 [2024-12-16 21:24:43.225894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.745 [2024-12-16 21:24:43.225939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:53.745 [2024-12-16 21:24:43.225953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:53.746 [2024-12-16 21:24:43.225961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.228221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.228367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.746 [2024-12-16 21:24:43.228384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:18:53.746 [2024-12-16 21:24:43.228391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.228984] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:53.746 [2024-12-16 21:24:43.229371] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:53.746 [2024-12-16 21:24:43.229413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.229423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.746 [2024-12-16 21:24:43.229433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:18:53.746 [2024-12-16 21:24:43.229440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.230602] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:53.746 [2024-12-16 21:24:43.233280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.233315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:53.746 [2024-12-16 21:24:43.233329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:18:53.746 [2024-12-16 21:24:43.233336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.233396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.233406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:53.746 [2024-12-16 21:24:43.233414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:53.746 [2024-12-16 21:24:43.233420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.238504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:53.746 [2024-12-16 21:24:43.238534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.746 [2024-12-16 21:24:43.238544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.042 ms 00:18:53.746 [2024-12-16 21:24:43.238552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.238679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.238690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.746 [2024-12-16 21:24:43.238703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:53.746 [2024-12-16 21:24:43.238713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.238740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.238748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:53.746 [2024-12-16 21:24:43.238756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:53.746 [2024-12-16 21:24:43.238767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.238805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:53.746 [2024-12-16 21:24:43.240196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.240224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.746 [2024-12-16 21:24:43.240237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:18:53.746 [2024-12-16 21:24:43.240244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.240289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.240300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:53.746 [2024-12-16 21:24:43.240308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:53.746 [2024-12-16 21:24:43.240315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.240333] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:53.746 [2024-12-16 21:24:43.240351] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:53.746 [2024-12-16 21:24:43.240389] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:53.746 [2024-12-16 21:24:43.240406] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:53.746 [2024-12-16 21:24:43.240512] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:53.746 [2024-12-16 21:24:43.240522] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:53.746 [2024-12-16 21:24:43.240532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:53.746 [2024-12-16 21:24:43.240542] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:53.746 [2024-12-16 21:24:43.240551] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:53.746 [2024-12-16 21:24:43.240559] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:53.746 [2024-12-16 21:24:43.240570] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:53.746 [2024-12-16 21:24:43.240576] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:53.746 [2024-12-16 21:24:43.240585] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:53.746 [2024-12-16 21:24:43.240594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.240601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:53.746 [2024-12-16 21:24:43.240609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:18:53.746 [2024-12-16 21:24:43.240616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.240720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.240730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:53.746 [2024-12-16 21:24:43.240737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:53.746 [2024-12-16 21:24:43.240744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.240849] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:53.746 [2024-12-16 21:24:43.240860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:53.746 [2024-12-16 21:24:43.240869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.746 [2024-12-16 21:24:43.240877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.746 [2024-12-16 21:24:43.240886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:53.746 [2024-12-16 21:24:43.240894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:53.746 [2024-12-16 21:24:43.240901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:53.746 [2024-12-16 21:24:43.240912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:53.746 [2024-12-16 21:24:43.240920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:53.746 [2024-12-16 21:24:43.240927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.746 [2024-12-16 21:24:43.240935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:53.746 [2024-12-16 21:24:43.240942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:53.746 [2024-12-16 21:24:43.240950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.746 [2024-12-16 21:24:43.240978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:53.746 [2024-12-16 21:24:43.240986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:53.746 [2024-12-16 21:24:43.240994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:53.746 [2024-12-16 21:24:43.241009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:53.746 [2024-12-16 21:24:43.241016] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:53.746 [2024-12-16 21:24:43.241033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.746 [2024-12-16 21:24:43.241048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:53.746 [2024-12-16 21:24:43.241062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.746 [2024-12-16 21:24:43.241078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:53.746 [2024-12-16 21:24:43.241085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.746 [2024-12-16 21:24:43.241100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:53.746 [2024-12-16 21:24:43.241107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.746 [2024-12-16 21:24:43.241122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:53.746 [2024-12-16 21:24:43.241129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.746 [2024-12-16 21:24:43.241144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:53.746 [2024-12-16 21:24:43.241152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:53.746 [2024-12-16 21:24:43.241159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.746 [2024-12-16 21:24:43.241166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:53.746 [2024-12-16 21:24:43.241174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:53.746 [2024-12-16 21:24:43.241183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:53.746 [2024-12-16 21:24:43.241198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:53.746 [2024-12-16 21:24:43.241206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241214] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:53.746 [2024-12-16 21:24:43.241222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:53.746 [2024-12-16 21:24:43.241229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.746 [2024-12-16 21:24:43.241236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.746 [2024-12-16 21:24:43.241243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:53.746 [2024-12-16 21:24:43.241249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:53.746 [2024-12-16 21:24:43.241256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:53.746 
[2024-12-16 21:24:43.241262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:53.746 [2024-12-16 21:24:43.241268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:53.746 [2024-12-16 21:24:43.241277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:53.746 [2024-12-16 21:24:43.241285] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:53.746 [2024-12-16 21:24:43.241294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.746 [2024-12-16 21:24:43.241306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:53.746 [2024-12-16 21:24:43.241313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:53.746 [2024-12-16 21:24:43.241320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:53.746 [2024-12-16 21:24:43.241327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:53.746 [2024-12-16 21:24:43.241334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:53.746 [2024-12-16 21:24:43.241342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:53.746 [2024-12-16 21:24:43.241349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:53.746 [2024-12-16 21:24:43.241360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:53.746 [2024-12-16 21:24:43.241367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:53.746 [2024-12-16 21:24:43.241374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:53.746 [2024-12-16 21:24:43.241381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:53.746 [2024-12-16 21:24:43.241388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:53.746 [2024-12-16 21:24:43.241395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:53.746 [2024-12-16 21:24:43.241402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:53.746 [2024-12-16 21:24:43.241410] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:53.746 [2024-12-16 21:24:43.241420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.746 [2024-12-16 21:24:43.241430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:53.746 [2024-12-16 21:24:43.241438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:53.746 [2024-12-16 21:24:43.241445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:53.746 [2024-12-16 21:24:43.241452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:53.746 [2024-12-16 21:24:43.241459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.241466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:53.746 [2024-12-16 21:24:43.241474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:18:53.746 [2024-12-16 21:24:43.241484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.746 [2024-12-16 21:24:43.250512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.746 [2024-12-16 21:24:43.250691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:53.747 [2024-12-16 21:24:43.250713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.977 ms 00:18:53.747 [2024-12-16 21:24:43.250724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.250842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.250852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:53.747 [2024-12-16 21:24:43.250864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:53.747 [2024-12-16 21:24:43.250874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.274045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.274116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:53.747 [2024-12-16 21:24:43.274140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.143 ms 00:18:53.747 [2024-12-16 21:24:43.274162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.274306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.274338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:53.747 [2024-12-16 21:24:43.274356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:53.747 [2024-12-16 21:24:43.274371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.274847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.274875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:53.747 [2024-12-16 21:24:43.274896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:18:53.747 [2024-12-16 21:24:43.274913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.275163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.275207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:53.747 [2024-12-16 21:24:43.275224] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:18:53.747 [2024-12-16 21:24:43.275245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.281654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.281685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:53.747 [2024-12-16 21:24:43.281694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.370 ms 00:18:53.747 [2024-12-16 21:24:43.281705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.284391] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:53.747 [2024-12-16 21:24:43.284432] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:53.747 [2024-12-16 21:24:43.284443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.284451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:53.747 [2024-12-16 21:24:43.284459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:18:53.747 [2024-12-16 21:24:43.284466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.299328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.299360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:53.747 [2024-12-16 21:24:43.299371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.801 ms 00:18:53.747 [2024-12-16 21:24:43.299384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.301656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.301688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:53.747 [2024-12-16 21:24:43.301696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:18:53.747 [2024-12-16 21:24:43.301703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.303597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.303656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:53.747 [2024-12-16 21:24:43.303665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:18:53.747 [2024-12-16 21:24:43.303671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.303994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.304008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:53.747 [2024-12-16 21:24:43.304016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:18:53.747 [2024-12-16 21:24:43.304026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.322236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.322274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:53.747 [2024-12-16 21:24:43.322284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.190 ms 00:18:53.747 [2024-12-16 21:24:43.322293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.329862] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:53.747 [2024-12-16 21:24:43.344183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.344220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:53.747 [2024-12-16 21:24:43.344238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.796 ms 00:18:53.747 [2024-12-16 21:24:43.344246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.344317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.344327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:53.747 [2024-12-16 21:24:43.344342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:53.747 [2024-12-16 21:24:43.344350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.344394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.344402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:53.747 [2024-12-16 21:24:43.344410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:53.747 [2024-12-16 21:24:43.344417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.344440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.344448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:53.747 [2024-12-16 21:24:43.344456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:53.747 [2024-12-16 21:24:43.344465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.344496] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:53.747 [2024-12-16 21:24:43.344505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.344513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:53.747 [2024-12-16 21:24:43.344523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:53.747 [2024-12-16 21:24:43.344533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.348210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.348366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:53.747 [2024-12-16 21:24:43.348383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:18:53.747 [2024-12-16 21:24:43.348391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 [2024-12-16 21:24:43.348480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.747 [2024-12-16 21:24:43.348490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:53.747 [2024-12-16 21:24:43.348499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:53.747 [2024-12-16 21:24:43.348506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.747 
[2024-12-16 21:24:43.349729] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:53.747 [2024-12-16 21:24:43.350777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.521 ms, result 0 00:18:53.747 [2024-12-16 21:24:43.351929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:53.747 [2024-12-16 21:24:43.360993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:54.687  [2024-12-16T21:24:45.766Z] Copying: 17/256 [MB] (17 MBps) [2024-12-16T21:24:46.710Z] Copying: 43/256 [MB] (25 MBps) [2024-12-16T21:24:47.650Z] Copying: 64/256 [MB] (21 MBps) [2024-12-16T21:24:48.590Z] Copying: 81/256 [MB] (17 MBps) [2024-12-16T21:24:49.528Z] Copying: 97/256 [MB] (15 MBps) [2024-12-16T21:24:50.470Z] Copying: 113/256 [MB] (16 MBps) [2024-12-16T21:24:51.412Z] Copying: 128/256 [MB] (14 MBps) [2024-12-16T21:24:52.799Z] Copying: 147/256 [MB] (19 MBps) [2024-12-16T21:24:53.369Z] Copying: 161/256 [MB] (13 MBps) [2024-12-16T21:24:54.756Z] Copying: 175572/262144 [kB] (10128 kBps) [2024-12-16T21:24:55.695Z] Copying: 185680/262144 [kB] (10108 kBps) [2024-12-16T21:24:56.636Z] Copying: 195896/262144 [kB] (10216 kBps) [2024-12-16T21:24:57.570Z] Copying: 201/256 [MB] (10 MBps) [2024-12-16T21:24:58.512Z] Copying: 224/256 [MB] (22 MBps) [2024-12-16T21:24:59.449Z] Copying: 239/256 [MB] (14 MBps) [2024-12-16T21:24:59.449Z] Copying: 255/256 [MB] (16 MBps) [2024-12-16T21:24:59.449Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-16 21:24:59.397554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:09.749 [2024-12-16 21:24:59.398917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.398952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:09.749 [2024-12-16 21:24:59.398965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:09.749 [2024-12-16 21:24:59.398974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.398994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:09.749 [2024-12-16 21:24:59.399489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.399514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:09.749 [2024-12-16 21:24:59.399523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:19:09.749 [2024-12-16 21:24:59.399531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.401703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.401739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:09.749 [2024-12-16 21:24:59.401752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:19:09.749 [2024-12-16 21:24:59.401760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.409540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.409576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:09.749 [2024-12-16 
21:24:59.409587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.763 ms 00:19:09.749 [2024-12-16 21:24:59.409595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.416462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.416604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:09.749 [2024-12-16 21:24:59.416620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:19:09.749 [2024-12-16 21:24:59.416644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.419328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.419363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:09.749 [2024-12-16 21:24:59.419373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:19:09.749 [2024-12-16 21:24:59.419380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.424720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.424856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:09.749 [2024-12-16 21:24:59.424916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.304 ms 00:19:09.749 [2024-12-16 21:24:59.424940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.425090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.425116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:09.749 [2024-12-16 21:24:59.425136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:09.749 [2024-12-16 21:24:59.425159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.428363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.428495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:09.749 [2024-12-16 21:24:59.428548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.175 ms 00:19:09.749 [2024-12-16 21:24:59.428570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.431248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.431373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:09.749 [2024-12-16 21:24:59.431428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:19:09.749 [2024-12-16 21:24:59.431450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.433543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.433682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:09.749 [2024-12-16 21:24:59.433733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.048 ms 00:19:09.749 [2024-12-16 21:24:59.433754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.436424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.749 [2024-12-16 21:24:59.436581] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:09.749 [2024-12-16 21:24:59.436650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:19:09.749 [2024-12-16 21:24:59.436675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.749 [2024-12-16 21:24:59.436720] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:09.749 [2024-12-16 21:24:59.436749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.436780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.436809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.436837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.436906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.436936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.436991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 
21:24:59.437580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:09.749 [2024-12-16 21:24:59.437845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.437884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.437912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.437941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.437969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.437997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:19:09.750 [2024-12-16 21:24:59.438467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.438990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:09.750 [2024-12-16 21:24:59.439997] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:09.750 [2024-12-16 21:24:59.440005] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed315229-44e7-4b17-bfcd-321e68a18dd7 00:19:09.750 [2024-12-16 21:24:59.440013] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:09.750 [2024-12-16 21:24:59.440020] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:09.750 [2024-12-16 21:24:59.440027] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:09.750 [2024-12-16 21:24:59.440035] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:09.750 [2024-12-16 21:24:59.440042] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:09.750 [2024-12-16 21:24:59.440049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:09.750 [2024-12-16 21:24:59.440062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:09.750 [2024-12-16 21:24:59.440068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:09.750 [2024-12-16 21:24:59.440075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:09.750 [2024-12-16 21:24:59.440083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.750 [2024-12-16 21:24:59.440091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:09.750 [2024-12-16 21:24:59.440100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.363 ms 00:19:09.750 [2024-12-16 21:24:59.440107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.750 [2024-12-16 21:24:59.441938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.750 [2024-12-16 21:24:59.441965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:09.750 [2024-12-16 21:24:59.441983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.806 ms 00:19:09.750 [2024-12-16 21:24:59.441991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.750 [2024-12-16 21:24:59.442107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.750 [2024-12-16 21:24:59.442121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:09.750 [2024-12-16 21:24:59.442131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:09.750 [2024-12-16 21:24:59.442139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.448524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.448568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:10.013 [2024-12-16 21:24:59.448578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.448590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 
[2024-12-16 21:24:59.448669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.448679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.013 [2024-12-16 21:24:59.448687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.448695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.448737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.448746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.013 [2024-12-16 21:24:59.448754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.448761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.448781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.448789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.013 [2024-12-16 21:24:59.448796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.448803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.460072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.460116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.013 [2024-12-16 21:24:59.460126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.460140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.469057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.469101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.013 [2024-12-16 21:24:59.469112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.469120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.469151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.469159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.013 [2024-12-16 21:24:59.469167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.469175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.469205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.469219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.013 [2024-12-16 21:24:59.469227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.469235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.469303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.469313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.013 [2024-12-16 21:24:59.469327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.469341] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.469380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.469389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:10.013 [2024-12-16 21:24:59.469399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.469407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.469460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.469470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.013 [2024-12-16 21:24:59.469478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.469485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.469530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.013 [2024-12-16 21:24:59.469546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.013 [2024-12-16 21:24:59.469554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.013 [2024-12-16 21:24:59.469561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.013 [2024-12-16 21:24:59.469936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.992 ms, result 0 00:19:10.275 00:19:10.275 00:19:10.275 21:24:59 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89456 00:19:10.275 21:24:59 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:10.275 21:24:59 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89456 00:19:10.275 21:24:59 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89456 ']' 00:19:10.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.275 21:24:59 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.275 21:24:59 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.275 21:24:59 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.275 21:24:59 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.275 21:24:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:10.275 [2024-12-16 21:24:59.882162] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:10.275 [2024-12-16 21:24:59.882552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89456 ] 00:19:10.536 [2024-12-16 21:25:00.034547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.536 [2024-12-16 21:25:00.069449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.108 21:25:00 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.108 21:25:00 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:11.108 21:25:00 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:11.370 [2024-12-16 21:25:00.914710] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:11.370 [2024-12-16 21:25:00.914785] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:11.644 [2024-12-16 21:25:01.088242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.644 [2024-12-16 21:25:01.088306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:11.645 [2024-12-16 21:25:01.088322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:11.645 [2024-12-16 21:25:01.088337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.090959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.091009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:11.645 [2024-12-16 21:25:01.091022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:19:11.645 [2024-12-16 21:25:01.091032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.091156] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:11.645 [2024-12-16 21:25:01.091433] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:11.645 [2024-12-16 21:25:01.091448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.091458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:11.645 [2024-12-16 21:25:01.091467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:11.645 [2024-12-16 21:25:01.091477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.093934] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:11.645 [2024-12-16 21:25:01.098009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.098057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:11.645 [2024-12-16 21:25:01.098072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.070 ms 00:19:11.645 [2024-12-16 21:25:01.098081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.098169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.098180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:11.645 [2024-12-16 21:25:01.098193] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:11.645 [2024-12-16 21:25:01.098201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.106693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.106727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:11.645 [2024-12-16 21:25:01.106748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.432 ms 00:19:11.645 [2024-12-16 21:25:01.106755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.106896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.106908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:11.645 [2024-12-16 21:25:01.106923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:11.645 [2024-12-16 21:25:01.106931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.106957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.106968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:11.645 [2024-12-16 21:25:01.106979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:11.645 [2024-12-16 21:25:01.106986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.107014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:11.645 [2024-12-16 21:25:01.109108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.109149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:11.645 [2024-12-16 21:25:01.109164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:19:11.645 [2024-12-16 21:25:01.109174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.109218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.109229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:11.645 [2024-12-16 21:25:01.109237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:11.645 [2024-12-16 21:25:01.109247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.109268] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:11.645 [2024-12-16 21:25:01.109293] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:11.645 [2024-12-16 21:25:01.109333] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:11.645 [2024-12-16 21:25:01.109350] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:11.645 [2024-12-16 21:25:01.109456] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:11.645 [2024-12-16 21:25:01.109468] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:11.645 [2024-12-16 21:25:01.109479] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:11.645 [2024-12-16 21:25:01.109498] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:11.645 [2024-12-16 21:25:01.109507] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:11.645 [2024-12-16 21:25:01.109519] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:11.645 [2024-12-16 21:25:01.109529] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:11.645 [2024-12-16 21:25:01.109541] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:11.645 [2024-12-16 21:25:01.109549] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:11.645 [2024-12-16 21:25:01.109559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.109569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:11.645 [2024-12-16 21:25:01.109578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:19:11.645 [2024-12-16 21:25:01.109586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.109691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.645 [2024-12-16 21:25:01.109703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:11.645 [2024-12-16 21:25:01.109713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:11.645 [2024-12-16 21:25:01.109720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.645 [2024-12-16 21:25:01.109825] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:11.645 [2024-12-16 21:25:01.109836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:11.645 [2024-12-16 21:25:01.109847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:11.645 [2024-12-16 21:25:01.109856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.645 [2024-12-16 21:25:01.109874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:11.645 [2024-12-16 21:25:01.109883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:11.645 [2024-12-16 21:25:01.109894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:11.645 [2024-12-16 21:25:01.109902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:11.645 [2024-12-16 21:25:01.109913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:11.645 [2024-12-16 21:25:01.109921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:11.645 [2024-12-16 21:25:01.109931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:11.645 [2024-12-16 21:25:01.109939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:11.645 [2024-12-16 21:25:01.109949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:11.645 [2024-12-16 21:25:01.109957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:11.645 [2024-12-16 21:25:01.109968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:11.645 [2024-12-16 21:25:01.109976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.645 
[2024-12-16 21:25:01.109986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:11.645 [2024-12-16 21:25:01.109993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:11.645 [2024-12-16 21:25:01.110003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.645 [2024-12-16 21:25:01.110011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:11.645 [2024-12-16 21:25:01.110023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:11.645 [2024-12-16 21:25:01.110030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.645 [2024-12-16 21:25:01.110040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:11.645 [2024-12-16 21:25:01.110048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:11.645 [2024-12-16 21:25:01.110058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.645 [2024-12-16 21:25:01.110065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:11.645 [2024-12-16 21:25:01.110075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:11.645 [2024-12-16 21:25:01.110082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.645 [2024-12-16 21:25:01.110092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:11.645 [2024-12-16 21:25:01.110099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:11.645 [2024-12-16 21:25:01.110110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.645 [2024-12-16 21:25:01.110118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:11.645 [2024-12-16 21:25:01.110127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:11.645 [2024-12-16 21:25:01.110135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:11.645 [2024-12-16 21:25:01.110144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:11.645 [2024-12-16 21:25:01.110152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:11.645 [2024-12-16 21:25:01.110168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:11.645 [2024-12-16 21:25:01.110176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:11.645 [2024-12-16 21:25:01.110187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:11.645 [2024-12-16 21:25:01.110194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.645 [2024-12-16 21:25:01.110204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:11.645 [2024-12-16 21:25:01.110212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:11.645 [2024-12-16 21:25:01.110221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.645 [2024-12-16 21:25:01.110229] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:11.645 [2024-12-16 21:25:01.110240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:11.645 [2024-12-16 21:25:01.110252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:11.645 [2024-12-16 21:25:01.110262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.646 [2024-12-16 21:25:01.110270] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:11.646 [2024-12-16 21:25:01.110279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:11.646 [2024-12-16 21:25:01.110286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:11.646 [2024-12-16 21:25:01.110295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:11.646 [2024-12-16 21:25:01.110302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:11.646 [2024-12-16 21:25:01.110313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:11.646 [2024-12-16 21:25:01.110322] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:11.646 [2024-12-16 21:25:01.110334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:11.646 [2024-12-16 21:25:01.110352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:11.646 [2024-12-16 21:25:01.110362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:11.646 [2024-12-16 21:25:01.110369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:11.646 [2024-12-16 21:25:01.110380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:11.646 [2024-12-16 21:25:01.110387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:11.646 [2024-12-16 21:25:01.110396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:11.646 [2024-12-16 21:25:01.110403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:11.646 [2024-12-16 21:25:01.110411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:11.646 [2024-12-16 21:25:01.110418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:11.646 [2024-12-16 21:25:01.110427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:11.646 [2024-12-16 21:25:01.110434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:11.646 [2024-12-16 21:25:01.110449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:11.646 [2024-12-16 21:25:01.110456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:11.646 [2024-12-16 21:25:01.110471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:11.646 [2024-12-16 21:25:01.110479] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:11.646 [2024-12-16 
21:25:01.110493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:11.646 [2024-12-16 21:25:01.110501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:11.646 [2024-12-16 21:25:01.110511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:11.646 [2024-12-16 21:25:01.110518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:11.646 [2024-12-16 21:25:01.110528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:11.646 [2024-12-16 21:25:01.110536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.110550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:11.646 [2024-12-16 21:25:01.110557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:19:11.646 [2024-12-16 21:25:01.110567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.125002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.125042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:11.646 [2024-12-16 21:25:01.125053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.375 ms 00:19:11.646 [2024-12-16 21:25:01.125065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.125191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.125206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:11.646 [2024-12-16 21:25:01.125216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:11.646 [2024-12-16 21:25:01.125226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.137604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.137662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:11.646 [2024-12-16 21:25:01.137673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.350 ms 00:19:11.646 [2024-12-16 21:25:01.137685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.137753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.137766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:11.646 [2024-12-16 21:25:01.137774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:11.646 [2024-12-16 21:25:01.137784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.138305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.138345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:11.646 [2024-12-16 21:25:01.138356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:19:11.646 [2024-12-16 21:25:01.138367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.138526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.138547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:11.646 [2024-12-16 21:25:01.138557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:11.646 [2024-12-16 21:25:01.138567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.146715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.146755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:11.646 [2024-12-16 21:25:01.146765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.122 ms 00:19:11.646 [2024-12-16 21:25:01.146775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.159600] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:11.646 [2024-12-16 21:25:01.159672] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:11.646 [2024-12-16 21:25:01.159687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.159698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:11.646 [2024-12-16 21:25:01.159710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.813 ms 00:19:11.646 [2024-12-16 21:25:01.159721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.176137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.176189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:11.646 [2024-12-16 21:25:01.176202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.353 ms 00:19:11.646 [2024-12-16 21:25:01.176217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.179693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.179739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:11.646 [2024-12-16 21:25:01.179749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.371 ms 00:19:11.646 [2024-12-16 21:25:01.179759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.182714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.182764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:11.646 [2024-12-16 21:25:01.182774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.864 ms 00:19:11.646 [2024-12-16 21:25:01.182784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.183128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.183143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:11.646 [2024-12-16 21:25:01.183153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:19:11.646 [2024-12-16 21:25:01.183163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 
21:25:01.209839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.209890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:11.646 [2024-12-16 21:25:01.209902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.655 ms 00:19:11.646 [2024-12-16 21:25:01.209918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.218203] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:11.646 [2024-12-16 21:25:01.236572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.236617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:11.646 [2024-12-16 21:25:01.236645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.560 ms 00:19:11.646 [2024-12-16 21:25:01.236654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.236754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.236767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:11.646 [2024-12-16 21:25:01.236779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:11.646 [2024-12-16 21:25:01.236787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.236844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.236853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:11.646 [2024-12-16 21:25:01.236864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:11.646 [2024-12-16 21:25:01.236871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.236897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.646 [2024-12-16 21:25:01.236905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:11.646 [2024-12-16 21:25:01.236923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:11.646 [2024-12-16 21:25:01.236931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.646 [2024-12-16 21:25:01.236992] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:11.646 [2024-12-16 21:25:01.237003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.647 [2024-12-16 21:25:01.237013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:11.647 [2024-12-16 21:25:01.237020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:11.647 [2024-12-16 21:25:01.237030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.647 [2024-12-16 21:25:01.243187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.647 [2024-12-16 21:25:01.243237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:11.647 [2024-12-16 21:25:01.243250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.135 ms 00:19:11.647 [2024-12-16 21:25:01.243261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.647 [2024-12-16 21:25:01.243349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.647 [2024-12-16 21:25:01.243361] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:11.647 [2024-12-16 21:25:01.243370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:11.647 [2024-12-16 21:25:01.243380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.647 [2024-12-16 21:25:01.244394] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:11.647 [2024-12-16 21:25:01.245824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.856 ms, result 0 00:19:11.647 [2024-12-16 21:25:01.248057] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:11.647 Some configs were skipped because the RPC state that can call them passed over. 00:19:11.647 21:25:01 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:11.938 [2024-12-16 21:25:01.473615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.938 [2024-12-16 21:25:01.473676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:11.938 [2024-12-16 21:25:01.473693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:19:11.938 [2024-12-16 21:25:01.473701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.938 [2024-12-16 21:25:01.473739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.291 ms, result 0 00:19:11.938 true 00:19:11.938 21:25:01 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:12.206 [2024-12-16 21:25:01.689541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.206 [2024-12-16 21:25:01.689599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:12.206 [2024-12-16 21:25:01.689611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:19:12.206 [2024-12-16 21:25:01.689620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.206 [2024-12-16 21:25:01.689677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.946 ms, result 0 00:19:12.206 true 00:19:12.206 21:25:01 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89456 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89456 ']' 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89456 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89456 00:19:12.206 killing process with pid 89456 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89456' 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89456 00:19:12.206 21:25:01 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89456 00:19:12.206 [2024-12-16 21:25:01.863044] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.206 [2024-12-16 21:25:01.863098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.206 [2024-12-16 21:25:01.863113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.206 [2024-12-16 21:25:01.863121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.206 [2024-12-16 21:25:01.863148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:12.206 [2024-12-16 21:25:01.863667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.206 [2024-12-16 21:25:01.863702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.206 [2024-12-16 21:25:01.863711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:19:12.206 [2024-12-16 21:25:01.863720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.206 [2024-12-16 21:25:01.864023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.206 [2024-12-16 21:25:01.864041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:12.206 [2024-12-16 21:25:01.864051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:19:12.206 [2024-12-16 21:25:01.864065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.206 [2024-12-16 21:25:01.868559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.206 [2024-12-16 21:25:01.868600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:12.206 [2024-12-16 21:25:01.868609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.475 ms 00:19:12.206 [2024-12-16 21:25:01.868620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.206 [2024-12-16 21:25:01.875504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.206 [2024-12-16 21:25:01.875537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.206 [2024-12-16 21:25:01.875548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.838 ms 00:19:12.206 [2024-12-16 21:25:01.875561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.206 [2024-12-16 21:25:01.878167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.206 [2024-12-16 21:25:01.878210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.206 [2024-12-16 21:25:01.878219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.520 ms 00:19:12.206 [2024-12-16 21:25:01.878228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.206 [2024-12-16 21:25:01.882725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.206 [2024-12-16 21:25:01.882775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.206 [2024-12-16 21:25:01.882787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.460 ms 00:19:12.206 [2024-12-16 21:25:01.882797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.206 [2024-12-16 21:25:01.882933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.207 [2024-12-16 21:25:01.882945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.207 [2024-12-16 21:25:01.882954] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:12.207 [2024-12-16 21:25:01.882965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.207 [2024-12-16 21:25:01.885745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.207 [2024-12-16 21:25:01.885784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:12.207 [2024-12-16 21:25:01.885793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.761 ms 00:19:12.207 [2024-12-16 21:25:01.885805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.207 [2024-12-16 21:25:01.888356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.207 [2024-12-16 21:25:01.888395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:12.207 [2024-12-16 21:25:01.888403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.500 ms 00:19:12.207 [2024-12-16 21:25:01.888411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.207 [2024-12-16 21:25:01.890571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.207 [2024-12-16 21:25:01.890610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.207 [2024-12-16 21:25:01.890618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:19:12.207 [2024-12-16 21:25:01.890638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.207 [2024-12-16 21:25:01.892497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.207 [2024-12-16 21:25:01.892537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.207 [2024-12-16 21:25:01.892546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.797 ms 00:19:12.207 [2024-12-16 21:25:01.892554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.207 [2024-12-16 21:25:01.892588] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.207 [2024-12-16 21:25:01.892604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892707] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 
[2024-12-16 21:25:01.892919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.892992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:12.207 [2024-12-16 21:25:01.893137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:12.207 [2024-12-16 21:25:01.893269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:12.208 [2024-12-16 21:25:01.893475] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:12.208 [2024-12-16 21:25:01.893483] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed315229-44e7-4b17-bfcd-321e68a18dd7 00:19:12.208 [2024-12-16 21:25:01.893498] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:12.208 [2024-12-16 21:25:01.893505] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:12.208 [2024-12-16 21:25:01.893514] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:12.208 [2024-12-16 21:25:01.893521] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:12.208 [2024-12-16 21:25:01.893534] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:12.208 [2024-12-16 21:25:01.893542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:12.208 [2024-12-16 21:25:01.893550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:12.208 [2024-12-16 21:25:01.893557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:12.208 [2024-12-16 21:25:01.893564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:12.208 [2024-12-16 21:25:01.893571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:12.208 [2024-12-16 21:25:01.893580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:12.208 [2024-12-16 21:25:01.893588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:19:12.208 [2024-12-16 21:25:01.893599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.208 [2024-12-16 21:25:01.895255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.208 [2024-12-16 21:25:01.895290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:12.208 [2024-12-16 21:25:01.895299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:19:12.208 [2024-12-16 21:25:01.895308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.208 [2024-12-16 21:25:01.895401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.208 [2024-12-16 21:25:01.895412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:12.208 [2024-12-16 21:25:01.895420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:12.208 [2024-12-16 21:25:01.895431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.208 [2024-12-16 21:25:01.901461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.208 [2024-12-16 21:25:01.901506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.208 [2024-12-16 21:25:01.901516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.208 [2024-12-16 21:25:01.901525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.208 [2024-12-16 21:25:01.901609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.208 [2024-12-16 21:25:01.901621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.208 [2024-12-16 21:25:01.901643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.208 [2024-12-16 21:25:01.901657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.208 [2024-12-16 21:25:01.901699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.208 [2024-12-16 21:25:01.901711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.208 [2024-12-16 21:25:01.901718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.208 [2024-12-16 21:25:01.901727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.208 [2024-12-16 21:25:01.901746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.208 [2024-12-16 21:25:01.901755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.208 [2024-12-16 21:25:01.901762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.208 [2024-12-16 21:25:01.901771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.470 [2024-12-16 21:25:01.912537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.470 [2024-12-16 21:25:01.912585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.470 [2024-12-16 21:25:01.912600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.470 [2024-12-16 21:25:01.912615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.470 [2024-12-16 
21:25:01.920854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.470 [2024-12-16 21:25:01.920901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.470 [2024-12-16 21:25:01.920912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.470 [2024-12-16 21:25:01.920925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.470 [2024-12-16 21:25:01.920995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.470 [2024-12-16 21:25:01.921008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.470 [2024-12-16 21:25:01.921016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.470 [2024-12-16 21:25:01.921026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.470 [2024-12-16 21:25:01.921063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.470 [2024-12-16 21:25:01.921073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.470 [2024-12-16 21:25:01.921080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.470 [2024-12-16 21:25:01.921089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.470 [2024-12-16 21:25:01.921155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.470 [2024-12-16 21:25:01.921168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.470 [2024-12-16 21:25:01.921176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.470 [2024-12-16 21:25:01.921185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.470 [2024-12-16 21:25:01.921217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.470 [2024-12-16 21:25:01.921228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:12.470 [2024-12-16 21:25:01.921236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.470 [2024-12-16 21:25:01.921247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.470 [2024-12-16 21:25:01.921287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.470 [2024-12-16 21:25:01.921300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.470 [2024-12-16 21:25:01.921307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.470 [2024-12-16 21:25:01.921316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.470 [2024-12-16 21:25:01.921361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.470 [2024-12-16 21:25:01.921381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.470 [2024-12-16 21:25:01.921390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.470 [2024-12-16 21:25:01.921399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.470 [2024-12-16 21:25:01.921536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.468 ms, result 0 00:19:12.470 21:25:02 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:12.470 21:25:02 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:12.731 [2024-12-16 21:25:02.171751] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:12.731 [2024-12-16 21:25:02.171904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89492 ] 00:19:12.731 [2024-12-16 21:25:02.315751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.731 [2024-12-16 21:25:02.335666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.993 [2024-12-16 21:25:02.436508] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.993 [2024-12-16 21:25:02.436588] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.993 [2024-12-16 21:25:02.598311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.598384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:12.993 [2024-12-16 21:25:02.598400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:12.993 [2024-12-16 21:25:02.598408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.601124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.601179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.993 [2024-12-16 21:25:02.601191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:19:12.993 [2024-12-16 21:25:02.601199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.601318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:12.993 [2024-12-16 21:25:02.602364] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:12.993 [2024-12-16 21:25:02.602424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.602436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.993 [2024-12-16 21:25:02.602446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:19:12.993 [2024-12-16 21:25:02.602455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.604402] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:12.993 [2024-12-16 21:25:02.608322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.608375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:12.993 [2024-12-16 21:25:02.608392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.923 ms 00:19:12.993 [2024-12-16 21:25:02.608401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.608490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.608502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:12.993 [2024-12-16 21:25:02.608512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:19:12.993 [2024-12-16 21:25:02.608520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.616817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.616864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.993 [2024-12-16 21:25:02.616875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.251 ms 00:19:12.993 [2024-12-16 21:25:02.616883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.617041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.617054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.993 [2024-12-16 21:25:02.617068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:12.993 [2024-12-16 21:25:02.617079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.617113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.617122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:12.993 [2024-12-16 21:25:02.617135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:12.993 [2024-12-16 21:25:02.617142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.617170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:12.993 [2024-12-16 21:25:02.619308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.619346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.993 [2024-12-16 21:25:02.619357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:19:12.993 [2024-12-16 21:25:02.619370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.619421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.619433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:12.993 [2024-12-16 21:25:02.619442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:12.993 [2024-12-16 21:25:02.619455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.619478] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:12.993 [2024-12-16 21:25:02.619500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:12.993 [2024-12-16 21:25:02.619545] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:12.993 [2024-12-16 21:25:02.619565] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:12.993 [2024-12-16 21:25:02.619688] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:12.993 [2024-12-16 21:25:02.619699] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:12.993 [2024-12-16 21:25:02.619711] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:12.993 [2024-12-16 21:25:02.619721] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:12.993 [2024-12-16 21:25:02.619731] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:12.993 [2024-12-16 21:25:02.619740] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:12.993 [2024-12-16 21:25:02.619749] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:12.993 [2024-12-16 21:25:02.619756] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:12.993 [2024-12-16 21:25:02.619764] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:12.993 [2024-12-16 21:25:02.619778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.619789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:12.993 [2024-12-16 21:25:02.619797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:19:12.993 [2024-12-16 21:25:02.619805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.619894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.993 [2024-12-16 21:25:02.619910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:12.993 [2024-12-16 21:25:02.619918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:12.993 [2024-12-16 21:25:02.619930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.993 [2024-12-16 21:25:02.620037] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:12.993 [2024-12-16 21:25:02.620058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:12.993 [2024-12-16 21:25:02.620070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.993 [2024-12-16 21:25:02.620083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.993 [2024-12-16 21:25:02.620092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:12.993 [2024-12-16 21:25:02.620100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:12.993 [2024-12-16 21:25:02.620108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:12.993 [2024-12-16 21:25:02.620120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:12.994 [2024-12-16 21:25:02.620129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.994 [2024-12-16 21:25:02.620144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:12.994 [2024-12-16 21:25:02.620152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:12.994 [2024-12-16 21:25:02.620162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.994 [2024-12-16 21:25:02.620170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:12.994 [2024-12-16 21:25:02.620179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:12.994 [2024-12-16 21:25:02.620186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620194] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:12.994 [2024-12-16 21:25:02.620203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:12.994 [2024-12-16 21:25:02.620210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:12.994 [2024-12-16 21:25:02.620227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.994 [2024-12-16 21:25:02.620242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:12.994 [2024-12-16 21:25:02.620255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.994 [2024-12-16 21:25:02.620270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:12.994 [2024-12-16 21:25:02.620277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.994 [2024-12-16 21:25:02.620293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:12.994 [2024-12-16 21:25:02.620301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.994 [2024-12-16 21:25:02.620317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:12.994 [2024-12-16 21:25:02.620325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.994 [2024-12-16 21:25:02.620340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:12.994 [2024-12-16 21:25:02.620347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:12.994 [2024-12-16 21:25:02.620355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.994 [2024-12-16 21:25:02.620363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:12.994 [2024-12-16 21:25:02.620370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:12.994 [2024-12-16 21:25:02.620380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:12.994 [2024-12-16 21:25:02.620396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:12.994 [2024-12-16 21:25:02.620403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620412] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:12.994 [2024-12-16 21:25:02.620426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:12.994 [2024-12-16 21:25:02.620437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.994 [2024-12-16 21:25:02.620447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.994 [2024-12-16 21:25:02.620455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:12.994 
[2024-12-16 21:25:02.620462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:12.994 [2024-12-16 21:25:02.620469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:12.994 [2024-12-16 21:25:02.620476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:12.994 [2024-12-16 21:25:02.620484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:12.994 [2024-12-16 21:25:02.620490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:12.994 [2024-12-16 21:25:02.620498] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:12.994 [2024-12-16 21:25:02.620508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.994 [2024-12-16 21:25:02.620518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:12.994 [2024-12-16 21:25:02.620526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:12.994 [2024-12-16 21:25:02.620533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:12.994 [2024-12-16 21:25:02.620540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:12.994 [2024-12-16 21:25:02.620547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:12.994 [2024-12-16 21:25:02.620554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:12.994 [2024-12-16 21:25:02.620561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:12.994 [2024-12-16 21:25:02.620574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:12.994 [2024-12-16 21:25:02.620582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:12.994 [2024-12-16 21:25:02.620589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:12.994 [2024-12-16 21:25:02.620597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:12.994 [2024-12-16 21:25:02.620603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:12.994 [2024-12-16 21:25:02.620611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:12.994 [2024-12-16 21:25:02.620619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:12.994 [2024-12-16 21:25:02.620644] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:12.994 [2024-12-16 21:25:02.620657] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.994 [2024-12-16 21:25:02.620669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:12.994 [2024-12-16 21:25:02.620677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:12.994 [2024-12-16 21:25:02.620684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:12.994 [2024-12-16 21:25:02.620691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:12.994 [2024-12-16 21:25:02.620699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.994 [2024-12-16 21:25:02.620711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:12.994 [2024-12-16 21:25:02.620720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:19:12.994 [2024-12-16 21:25:02.620731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.994 [2024-12-16 21:25:02.634567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.994 [2024-12-16 21:25:02.634612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.994 [2024-12-16 21:25:02.634644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.778 ms 00:19:12.994 [2024-12-16 21:25:02.634653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.994 [2024-12-16 21:25:02.634784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.994 [2024-12-16 21:25:02.634801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.994 [2024-12-16 21:25:02.634810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:12.994 [2024-12-16 21:25:02.634817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.994 [2024-12-16 21:25:02.658856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.994 [2024-12-16 21:25:02.658934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.994 [2024-12-16 21:25:02.658953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.007 ms 00:19:12.994 [2024-12-16 21:25:02.658966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.994 [2024-12-16 21:25:02.659107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.994 [2024-12-16 21:25:02.659126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.994 [2024-12-16 21:25:02.659140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:12.994 [2024-12-16 21:25:02.659152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.994 [2024-12-16 21:25:02.659727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.994 [2024-12-16 21:25:02.659774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.994 [2024-12-16 21:25:02.659791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:12.994 [2024-12-16 21:25:02.659813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.994 [2024-12-16 
21:25:02.660029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.994 [2024-12-16 21:25:02.660047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.994 [2024-12-16 21:25:02.660060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:19:12.994 [2024-12-16 21:25:02.660071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.994 [2024-12-16 21:25:02.668355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.994 [2024-12-16 21:25:02.668398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.994 [2024-12-16 21:25:02.668409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.250 ms 00:19:12.994 [2024-12-16 21:25:02.668422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.994 [2024-12-16 21:25:02.672115] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:12.994 [2024-12-16 21:25:02.672167] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:12.994 [2024-12-16 21:25:02.672184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.995 [2024-12-16 21:25:02.672192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:12.995 [2024-12-16 21:25:02.672202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.650 ms 00:19:12.995 [2024-12-16 21:25:02.672209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.995 [2024-12-16 21:25:02.688165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.995 [2024-12-16 21:25:02.688215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:12.995 [2024-12-16 21:25:02.688228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.892 ms 00:19:12.995 [2024-12-16 21:25:02.688237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.995 [2024-12-16 21:25:02.691659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.995 [2024-12-16 21:25:02.691706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:12.995 [2024-12-16 21:25:02.691717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:19:12.995 [2024-12-16 21:25:02.691724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.338 [2024-12-16 21:25:02.694147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.338 [2024-12-16 21:25:02.694208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:13.338 [2024-12-16 21:25:02.694218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms 00:19:13.338 [2024-12-16 21:25:02.694225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.338 [2024-12-16 21:25:02.694577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.338 [2024-12-16 21:25:02.694608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:13.338 [2024-12-16 21:25:02.694618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:13.338 [2024-12-16 21:25:02.694645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.338 [2024-12-16 21:25:02.722356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:13.338 [2024-12-16 21:25:02.722421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:13.338 [2024-12-16 21:25:02.722435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.683 ms 00:19:13.338 [2024-12-16 21:25:02.722444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.338 [2024-12-16 21:25:02.731036] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:13.338 [2024-12-16 21:25:02.750991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.338 [2024-12-16 21:25:02.751056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:13.338 [2024-12-16 21:25:02.751069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.451 ms 00:19:13.338 [2024-12-16 21:25:02.751078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.338 [2024-12-16 21:25:02.751180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.338 [2024-12-16 21:25:02.751191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:13.339 [2024-12-16 21:25:02.751207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:13.339 [2024-12-16 21:25:02.751215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.339 [2024-12-16 21:25:02.751276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.339 [2024-12-16 21:25:02.751287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:13.339 [2024-12-16 21:25:02.751296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:13.339 [2024-12-16 21:25:02.751305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.339 [2024-12-16 21:25:02.751335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.339 [2024-12-16 21:25:02.751348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:13.339 [2024-12-16 21:25:02.751356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:13.339 [2024-12-16 21:25:02.751367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.339 [2024-12-16 21:25:02.751406] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:13.339 [2024-12-16 21:25:02.751416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.339 [2024-12-16 21:25:02.751426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:13.339 [2024-12-16 21:25:02.751434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:13.339 [2024-12-16 21:25:02.751442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.339 [2024-12-16 21:25:02.757564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.339 [2024-12-16 21:25:02.757612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:13.339 [2024-12-16 21:25:02.757653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.101 ms 00:19:13.339 [2024-12-16 21:25:02.757662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.339 [2024-12-16 21:25:02.757757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.339 [2024-12-16 21:25:02.757768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:13.339 [2024-12-16 21:25:02.757778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:13.339 [2024-12-16 21:25:02.757786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.339 [2024-12-16 21:25:02.759815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:13.339 [2024-12-16 21:25:02.761238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.160 ms, result 0 00:19:13.339 [2024-12-16 21:25:02.762496] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:13.339 [2024-12-16 21:25:02.769938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.278  [2024-12-16T21:25:04.917Z] Copying: 18/256 [MB] (18 MBps) [2024-12-16T21:25:05.858Z] Copying: 42/256 [MB] (24 MBps) [2024-12-16T21:25:06.800Z] Copying: 58/256 [MB] (15 MBps) [2024-12-16T21:25:08.185Z] Copying: 76/256 [MB] (18 MBps) [2024-12-16T21:25:09.124Z] Copying: 89/256 [MB] (12 MBps) [2024-12-16T21:25:10.067Z] Copying: 109/256 [MB] (20 MBps) [2024-12-16T21:25:11.007Z] Copying: 132/256 [MB] (23 MBps) [2024-12-16T21:25:11.942Z] Copying: 145/256 [MB] (12 MBps) [2024-12-16T21:25:12.885Z] Copying: 166/256 [MB] (21 MBps) [2024-12-16T21:25:13.826Z] Copying: 184/256 [MB] (17 MBps) [2024-12-16T21:25:15.210Z] Copying: 206/256 [MB] (21 MBps) [2024-12-16T21:25:15.781Z] Copying: 225/256 [MB] (19 MBps) [2024-12-16T21:25:17.168Z] Copying: 243/256 [MB] (18 MBps) [2024-12-16T21:25:17.168Z] Copying: 255/256 [MB] (11 MBps) [2024-12-16T21:25:17.168Z] Copying: 256/256 [MB] (average 18 MBps)[2024-12-16 21:25:16.787099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.468 [2024-12-16 21:25:16.789013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.789060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:27.468 [2024-12-16 21:25:16.789080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:27.468 [2024-12-16 21:25:16.789089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.789112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:27.468 [2024-12-16 21:25:16.789801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.789834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:27.468 [2024-12-16 21:25:16.789845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:19:27.468 [2024-12-16 21:25:16.789855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.790122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.790139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:27.468 [2024-12-16 21:25:16.790153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:19:27.468 [2024-12-16 21:25:16.790161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.793881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.793904] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:27.468 [2024-12-16 21:25:16.793913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.703 ms 00:19:27.468 [2024-12-16 21:25:16.793921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.800845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.800885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:27.468 [2024-12-16 21:25:16.800896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.883 ms 00:19:27.468 [2024-12-16 21:25:16.800910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.803519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.803568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:27.468 [2024-12-16 21:25:16.803579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.525 ms 00:19:27.468 [2024-12-16 21:25:16.803586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.809020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.809066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:27.468 [2024-12-16 21:25:16.809077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.376 ms 00:19:27.468 [2024-12-16 21:25:16.809084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.809230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.809241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:27.468 [2024-12-16 21:25:16.809250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:27.468 [2024-12-16 21:25:16.809261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.812772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.812815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:27.468 [2024-12-16 21:25:16.812824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.488 ms 00:19:27.468 [2024-12-16 21:25:16.812831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.815442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.815487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:27.468 [2024-12-16 21:25:16.815496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:19:27.468 [2024-12-16 21:25:16.815503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.817928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.817978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:27.468 [2024-12-16 21:25:16.817989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.385 ms 00:19:27.468 [2024-12-16 21:25:16.817997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.820193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:27.468 [2024-12-16 21:25:16.820241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:27.468 [2024-12-16 21:25:16.820250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:19:27.468 [2024-12-16 21:25:16.820257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-12-16 21:25:16.820295] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:27.468 [2024-12-16 21:25:16.820311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:27.468 [2024-12-16 21:25:16.820424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820466] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820681] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 
21:25:16.820872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.820998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:19:27.469 [2024-12-16 21:25:16.821081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:27.469 [2024-12-16 21:25:16.821123] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:27.469 [2024-12-16 21:25:16.821131] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed315229-44e7-4b17-bfcd-321e68a18dd7 00:19:27.469 [2024-12-16 21:25:16.821139] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:27.469 [2024-12-16 21:25:16.821147] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:27.469 [2024-12-16 21:25:16.821154] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:27.469 [2024-12-16 21:25:16.821162] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:27.469 [2024-12-16 21:25:16.821170] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:27.469 [2024-12-16 21:25:16.821178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:27.470 [2024-12-16 21:25:16.821188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:27.470 [2024-12-16 21:25:16.821195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:27.470 [2024-12-16 21:25:16.821201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:27.470 [2024-12-16 21:25:16.821208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.470 [2024-12-16 21:25:16.821215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:27.470 [2024-12-16 21:25:16.821229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:19:27.470 [2024-12-16 21:25:16.821236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.823438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.470 [2024-12-16 21:25:16.823473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:27.470 [2024-12-16 21:25:16.823482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:19:27.470 [2024-12-16 21:25:16.823497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.823619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.470 [2024-12-16 21:25:16.823648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:27.470 [2024-12-16 21:25:16.823657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:27.470 [2024-12-16 21:25:16.823665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.831249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.831306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.470 [2024-12-16 21:25:16.831316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 
[2024-12-16 21:25:16.831327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.831388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.831396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.470 [2024-12-16 21:25:16.831404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.831412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.831459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.831472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.470 [2024-12-16 21:25:16.831483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.831491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.831514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.831522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.470 [2024-12-16 21:25:16.831530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.831538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.845340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.845404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.470 [2024-12-16 21:25:16.845416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.845427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.856501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.856556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.470 [2024-12-16 21:25:16.856568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.856577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.856646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.856657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.470 [2024-12-16 21:25:16.856666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.856684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.856719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.856732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.470 [2024-12-16 21:25:16.856741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.856754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.856829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.856839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.470 [2024-12-16 21:25:16.856855] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.856863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.856900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.856913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:27.470 [2024-12-16 21:25:16.856921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.856932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.857000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.857010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.470 [2024-12-16 21:25:16.857020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.857028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.857079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.470 [2024-12-16 21:25:16.857095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.470 [2024-12-16 21:25:16.857104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.470 [2024-12-16 21:25:16.857112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.470 [2024-12-16 21:25:16.857274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.232 ms, result 0 00:19:27.470 00:19:27.470 00:19:27.470 21:25:17 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:27.470 21:25:17 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:28.041 21:25:17 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:28.041 [2024-12-16 21:25:17.711022] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:28.041 [2024-12-16 21:25:17.711163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89653 ] 00:19:28.301 [2024-12-16 21:25:17.859706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.302 [2024-12-16 21:25:17.888174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.563 [2024-12-16 21:25:18.004982] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.563 [2024-12-16 21:25:18.005078] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.563 [2024-12-16 21:25:18.166269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.563 [2024-12-16 21:25:18.166333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:28.563 [2024-12-16 21:25:18.166348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:28.563 [2024-12-16 21:25:18.166357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.563 [2024-12-16 21:25:18.169041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.563 [2024-12-16 21:25:18.169101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.563 [2024-12-16 21:25:18.169115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.663 ms 00:19:28.563 [2024-12-16 21:25:18.169123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.563 [2024-12-16 21:25:18.169241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:28.563 [2024-12-16 21:25:18.169859] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:28.563 [2024-12-16 21:25:18.169917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.563 [2024-12-16 21:25:18.169928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.563 [2024-12-16 21:25:18.169938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:19:28.563 [2024-12-16 21:25:18.169952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.563 [2024-12-16 21:25:18.171759] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:28.563 [2024-12-16 21:25:18.175419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-12-16 21:25:18.175468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:28.564 [2024-12-16 21:25:18.175491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.663 ms 00:19:28.564 [2024-12-16 21:25:18.175500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-12-16 21:25:18.175601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-12-16 21:25:18.175612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:28.564 [2024-12-16 21:25:18.175622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:28.564 [2024-12-16 21:25:18.175653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-12-16 21:25:18.183503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:28.564 [2024-12-16 21:25:18.183546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.564 [2024-12-16 21:25:18.183556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.802 ms 00:19:28.564 [2024-12-16 21:25:18.183564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-12-16 21:25:18.183725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-12-16 21:25:18.183738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.564 [2024-12-16 21:25:18.183747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:28.564 [2024-12-16 21:25:18.183762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-12-16 21:25:18.183793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-12-16 21:25:18.183802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:28.564 [2024-12-16 21:25:18.183810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:28.564 [2024-12-16 21:25:18.183818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-12-16 21:25:18.183840] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:28.564 [2024-12-16 21:25:18.185917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-12-16 21:25:18.185950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.564 [2024-12-16 21:25:18.185959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:19:28.564 [2024-12-16 21:25:18.185972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-12-16 21:25:18.186021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-12-16 21:25:18.186037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:28.564 [2024-12-16 21:25:18.186049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:28.564 [2024-12-16 21:25:18.186056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-12-16 21:25:18.186074] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:28.564 [2024-12-16 21:25:18.186096] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:28.564 [2024-12-16 21:25:18.186138] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:28.564 [2024-12-16 21:25:18.186157] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:28.564 [2024-12-16 21:25:18.186264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:28.564 [2024-12-16 21:25:18.186274] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:28.564 [2024-12-16 21:25:18.186285] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:28.564 [2024-12-16 21:25:18.186295] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:28.564 [2024-12-16 21:25:18.186304] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:28.564 [2024-12-16 21:25:18.186312] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:28.564 [2024-12-16 21:25:18.186321] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:28.564 [2024-12-16 21:25:18.186329] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:28.564 [2024-12-16 21:25:18.186341] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:28.564 [2024-12-16 21:25:18.186354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-12-16 21:25:18.186362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:28.564 [2024-12-16 21:25:18.186370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:19:28.564 [2024-12-16 21:25:18.186381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-12-16 21:25:18.186475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.564 [2024-12-16 21:25:18.186484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:28.564 [2024-12-16 21:25:18.186492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:28.564 [2024-12-16 21:25:18.186499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.564 [2024-12-16 21:25:18.186607] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:28.564 [2024-12-16 21:25:18.186644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:28.564 [2024-12-16 21:25:18.186654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.564 [2024-12-16 21:25:18.186663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:28.564 [2024-12-16 21:25:18.186679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:28.564 [2024-12-16 21:25:18.186699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:28.564 [2024-12-16 21:25:18.186707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.564 [2024-12-16 21:25:18.186723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:28.564 [2024-12-16 21:25:18.186730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:28.564 [2024-12-16 21:25:18.186739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.564 [2024-12-16 21:25:18.186748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:28.564 [2024-12-16 21:25:18.186759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:28.564 [2024-12-16 21:25:18.186767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:28.564 [2024-12-16 21:25:18.186784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:28.564 [2024-12-16 21:25:18.186792] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:28.564 [2024-12-16 21:25:18.186809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.564 [2024-12-16 21:25:18.186824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:28.564 [2024-12-16 21:25:18.186840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.564 [2024-12-16 21:25:18.186856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:28.564 [2024-12-16 21:25:18.186863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.564 [2024-12-16 21:25:18.186879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:28.564 [2024-12-16 21:25:18.186887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.564 [2024-12-16 21:25:18.186903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:28.564 [2024-12-16 21:25:18.186911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.564 [2024-12-16 21:25:18.186925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:28.564 [2024-12-16 21:25:18.186932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:28.564 [2024-12-16 21:25:18.186940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.564 [2024-12-16 21:25:18.186948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:28.564 [2024-12-16 21:25:18.186955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:28.564 [2024-12-16 21:25:18.186965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:28.564 [2024-12-16 21:25:18.186980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:28.564 [2024-12-16 21:25:18.186988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.564 [2024-12-16 21:25:18.186996] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:28.564 [2024-12-16 21:25:18.187008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:28.564 [2024-12-16 21:25:18.187020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.564 [2024-12-16 21:25:18.187034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.564 [2024-12-16 21:25:18.187043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:28.564 [2024-12-16 21:25:18.187050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:28.564 [2024-12-16 21:25:18.187057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:28.564 
[2024-12-16 21:25:18.187064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:28.564 [2024-12-16 21:25:18.187071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:28.564 [2024-12-16 21:25:18.187077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:28.564 [2024-12-16 21:25:18.187085] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:28.564 [2024-12-16 21:25:18.187094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.565 [2024-12-16 21:25:18.187105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:28.565 [2024-12-16 21:25:18.187113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:28.565 [2024-12-16 21:25:18.187120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:28.565 [2024-12-16 21:25:18.187127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:28.565 [2024-12-16 21:25:18.187133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:28.565 [2024-12-16 21:25:18.187141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:28.565 [2024-12-16 21:25:18.187147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:28.565 [2024-12-16 21:25:18.187162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:28.565 [2024-12-16 21:25:18.187176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:28.565 [2024-12-16 21:25:18.187183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:28.565 [2024-12-16 21:25:18.187190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:28.565 [2024-12-16 21:25:18.187198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:28.565 [2024-12-16 21:25:18.187205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:28.565 [2024-12-16 21:25:18.187212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:28.565 [2024-12-16 21:25:18.187219] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:28.565 [2024-12-16 21:25:18.187230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.565 [2024-12-16 21:25:18.187241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:28.565 [2024-12-16 21:25:18.187249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:28.565 [2024-12-16 21:25:18.187256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:28.565 [2024-12-16 21:25:18.187263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:28.565 [2024-12-16 21:25:18.187271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.187279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:28.565 [2024-12-16 21:25:18.187287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:19:28.565 [2024-12-16 21:25:18.187295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.201088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.201131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.565 [2024-12-16 21:25:18.201142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.740 ms 00:19:28.565 [2024-12-16 21:25:18.201151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.201286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.201304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:28.565 [2024-12-16 21:25:18.201314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:28.565 [2024-12-16 21:25:18.201322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.221969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.222033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.565 [2024-12-16 21:25:18.222048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.623 ms 00:19:28.565 [2024-12-16 21:25:18.222058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.222172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.222187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.565 [2024-12-16 21:25:18.222199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.565 [2024-12-16 21:25:18.222208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.222764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.222806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.565 [2024-12-16 21:25:18.222820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:28.565 [2024-12-16 21:25:18.222831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.223020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.223037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.565 [2024-12-16 21:25:18.223048] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:19:28.565 [2024-12-16 21:25:18.223064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.231430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.231477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.565 [2024-12-16 21:25:18.231488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.335 ms 00:19:28.565 [2024-12-16 21:25:18.231507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.235332] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:28.565 [2024-12-16 21:25:18.235386] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:28.565 [2024-12-16 21:25:18.235399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.235407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:28.565 [2024-12-16 21:25:18.235416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.785 ms 00:19:28.565 [2024-12-16 21:25:18.235423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.251917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.251982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:28.565 [2024-12-16 21:25:18.252004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.431 ms 00:19:28.565 [2024-12-16 21:25:18.252016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.254926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.254972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:28.565 [2024-12-16 21:25:18.254981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:19:28.565 [2024-12-16 21:25:18.254988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.257445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.257497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:28.565 [2024-12-16 21:25:18.257507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.403 ms 00:19:28.565 [2024-12-16 21:25:18.257514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.565 [2024-12-16 21:25:18.257875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.565 [2024-12-16 21:25:18.257895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:28.565 [2024-12-16 21:25:18.257905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:19:28.565 [2024-12-16 21:25:18.257913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.827 [2024-12-16 21:25:18.284299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.827 [2024-12-16 21:25:18.284359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:28.827 [2024-12-16 21:25:18.284373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.332 ms 00:19:28.827 [2024-12-16 21:25:18.284381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.827 [2024-12-16 21:25:18.292515] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:28.827 [2024-12-16 21:25:18.311978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.827 [2024-12-16 21:25:18.312045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:28.827 [2024-12-16 21:25:18.312058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.506 ms 00:19:28.827 [2024-12-16 21:25:18.312067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.827 [2024-12-16 21:25:18.312164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.827 [2024-12-16 21:25:18.312179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:28.827 [2024-12-16 21:25:18.312189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:28.827 [2024-12-16 21:25:18.312199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.827 [2024-12-16 21:25:18.312255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.827 [2024-12-16 21:25:18.312271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:28.827 [2024-12-16 21:25:18.312280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:28.827 [2024-12-16 21:25:18.312288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.827 [2024-12-16 21:25:18.312315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.827 [2024-12-16 21:25:18.312324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:28.827 [2024-12-16 21:25:18.312335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:28.827 [2024-12-16 21:25:18.312347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.827 [2024-12-16 21:25:18.312382] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:28.827 [2024-12-16 21:25:18.312392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.827 [2024-12-16 21:25:18.312400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:28.827 [2024-12-16 21:25:18.312408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:28.827 [2024-12-16 21:25:18.312416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.827 [2024-12-16 21:25:18.318328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.827 [2024-12-16 21:25:18.318381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:28.827 [2024-12-16 21:25:18.318392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.893 ms 00:19:28.827 [2024-12-16 21:25:18.318408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.827 [2024-12-16 21:25:18.318500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.827 [2024-12-16 21:25:18.318511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:28.827 [2024-12-16 21:25:18.318520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:28.827 [2024-12-16 21:25:18.318529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.827 
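Each FTL management step above is emitted by trace_step in mngt/ftl_mngt.c as an Action / name / duration / status quadruple, so per-step startup cost can be totalled straight from the console output. A minimal parsing sketch follows (a hypothetical helper, not part of the SPDK tree; it assumes one NOTICE record per line, as in the raw console stream before any wrapping):

import re
import sys
from collections import defaultdict

# Matches the trace_step records shown above, e.g.
#   "mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P"
#   "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.506 ms"
NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
DURATION = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def summarize(lines):
    # Pair each 'name:' record with the 'duration:' record that follows it
    # and accumulate per-step totals across startup/shutdown sequences.
    totals = defaultdict(float)
    step = None
    for line in lines:
        m = NAME.search(line)
        if m:
            step = m.group(1)
            continue
        m = DURATION.search(line)
        if m and step is not None:
            totals[step] += float(m.group(1))
            step = None
    return sorted(totals.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    for step, ms in summarize(sys.stdin):
        print(f"{ms:10.3f} ms  {step}")

Fed this section's output (e.g. python3 sum_steps.py < console.log, filename hypothetical), the largest contributors line up with the durations logged above: Initialize L2P (~27.5 ms), Restore P2L checkpoints (~26.3 ms), Initialize NV cache (~20.6 ms).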
[2024-12-16 21:25:18.319848] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.827 [2024-12-16 21:25:18.321213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.231 ms, result 0 00:19:28.827 [2024-12-16 21:25:18.322510] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.827 [2024-12-16 21:25:18.329916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:29.090  [2024-12-16T21:25:18.790Z] Copying: 4096/4096 [kB] (average 15 MBps)[2024-12-16 21:25:18.594910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:29.090 [2024-12-16 21:25:18.595969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.596015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:29.090 [2024-12-16 21:25:18.596027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:29.090 [2024-12-16 21:25:18.596042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.596064] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:29.090 [2024-12-16 21:25:18.596737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.596773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:29.090 [2024-12-16 21:25:18.596783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:19:29.090 [2024-12-16 21:25:18.596791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.598760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.598807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:29.090 [2024-12-16 21:25:18.598824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:19:29.090 [2024-12-16 21:25:18.598832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.603227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.603267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:29.090 [2024-12-16 21:25:18.603277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.379 ms 00:19:29.090 [2024-12-16 21:25:18.603285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.610168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.610213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:29.090 [2024-12-16 21:25:18.610231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:19:29.090 [2024-12-16 21:25:18.610238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.613269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.613318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:29.090 [2024-12-16 21:25:18.613327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.975 ms 00:19:29.090 [2024-12-16 21:25:18.613335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.618789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.618837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:29.090 [2024-12-16 21:25:18.618857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.410 ms 00:19:29.090 [2024-12-16 21:25:18.618866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.618999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.619013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:29.090 [2024-12-16 21:25:18.619022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:29.090 [2024-12-16 21:25:18.619030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.622509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.622555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:29.090 [2024-12-16 21:25:18.622564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.460 ms 00:19:29.090 [2024-12-16 21:25:18.622571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.625736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.625782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:29.090 [2024-12-16 21:25:18.625792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:19:29.090 [2024-12-16 21:25:18.625799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.627936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.627982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:29.090 [2024-12-16 21:25:18.627993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:19:29.090 [2024-12-16 21:25:18.628001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.630574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-12-16 21:25:18.630618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:29.090 [2024-12-16 21:25:18.630647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:19:29.090 [2024-12-16 21:25:18.630655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-12-16 21:25:18.630694] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:29.090 [2024-12-16 21:25:18.630709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 
21:25:18.630742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:29.090 [2024-12-16 21:25:18.630938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:29.090 [2024-12-16 21:25:18.630949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.630957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.630964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.630971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.630979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.630987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.630994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:29.091 [2024-12-16 21:25:18.631483] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:29.091 [2024-12-16 21:25:18.631491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed315229-44e7-4b17-bfcd-321e68a18dd7 00:19:29.091 [2024-12-16 21:25:18.631499] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:29.091 [2024-12-16 21:25:18.631507] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:29.091 
[2024-12-16 21:25:18.631514] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:29.091 [2024-12-16 21:25:18.631522] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:29.091 [2024-12-16 21:25:18.631535] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:29.091 [2024-12-16 21:25:18.631544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:29.091 [2024-12-16 21:25:18.631556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:29.091 [2024-12-16 21:25:18.631562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:29.091 [2024-12-16 21:25:18.631569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:29.091 [2024-12-16 21:25:18.631576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.091 [2024-12-16 21:25:18.631584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:29.091 [2024-12-16 21:25:18.631594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:19:29.091 [2024-12-16 21:25:18.631602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-12-16 21:25:18.633580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.091 [2024-12-16 21:25:18.633616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:29.091 [2024-12-16 21:25:18.633648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.960 ms 00:19:29.091 [2024-12-16 21:25:18.633657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-12-16 21:25:18.633785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.091 [2024-12-16 21:25:18.633795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:29.091 [2024-12-16 21:25:18.633804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:29.091 [2024-12-16 21:25:18.633811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-12-16 21:25:18.641376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.091 [2024-12-16 21:25:18.641421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:29.091 [2024-12-16 21:25:18.641437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.091 [2024-12-16 21:25:18.641450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.091 [2024-12-16 21:25:18.641507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.641515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.092 [2024-12-16 21:25:18.641524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.641531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.641577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.641587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.092 [2024-12-16 21:25:18.641595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.641606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.641658] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.641667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.092 [2024-12-16 21:25:18.641676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.641688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.655301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.655353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.092 [2024-12-16 21:25:18.655370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.655379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.666546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.666598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.092 [2024-12-16 21:25:18.666609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.666616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.666719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.666731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.092 [2024-12-16 21:25:18.666741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.666749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.666785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.666800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.092 [2024-12-16 21:25:18.666809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.666817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.666892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.666903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.092 [2024-12-16 21:25:18.666920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.666927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.666965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.666975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.092 [2024-12-16 21:25:18.666988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.666996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.667044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.667055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.092 [2024-12-16 21:25:18.667065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.667073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:29.092 [2024-12-16 21:25:18.667124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.092 [2024-12-16 21:25:18.667147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.092 [2024-12-16 21:25:18.667157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.092 [2024-12-16 21:25:18.667167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.092 [2024-12-16 21:25:18.667324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.319 ms, result 0 00:19:29.353 00:19:29.353 00:19:29.353 21:25:18 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89677 00:19:29.353 21:25:18 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89677 00:19:29.353 21:25:18 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:29.353 21:25:18 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89677 ']' 00:19:29.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.353 21:25:18 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.353 21:25:18 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.353 21:25:18 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.353 21:25:18 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.353 21:25:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:29.353 [2024-12-16 21:25:18.981895] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:29.353 [2024-12-16 21:25:18.982283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89677 ] 00:19:29.612 [2024-12-16 21:25:19.131096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.612 [2024-12-16 21:25:19.159515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.184 21:25:19 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.184 21:25:19 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:30.184 21:25:19 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:30.445 [2024-12-16 21:25:20.036101] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.445 [2024-12-16 21:25:20.036189] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.709 [2024-12-16 21:25:20.214481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.214554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:30.709 [2024-12-16 21:25:20.214570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.709 [2024-12-16 21:25:20.214581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.217124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.217179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.709 [2024-12-16 21:25:20.217190] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.524 ms 00:19:30.709 [2024-12-16 21:25:20.217203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.217307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:30.709 [2024-12-16 21:25:20.218155] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:30.709 [2024-12-16 21:25:20.218210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.218225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.709 [2024-12-16 21:25:20.218236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:19:30.709 [2024-12-16 21:25:20.218246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.220008] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:30.709 [2024-12-16 21:25:20.223894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.223948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:30.709 [2024-12-16 21:25:20.223961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.883 ms 00:19:30.709 [2024-12-16 21:25:20.223970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.224056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.224067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:30.709 [2024-12-16 21:25:20.224081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:30.709 [2024-12-16 21:25:20.224089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.232096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.232137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.709 [2024-12-16 21:25:20.232149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.946 ms 00:19:30.709 [2024-12-16 21:25:20.232158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.232293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.232305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.709 [2024-12-16 21:25:20.232317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:30.709 [2024-12-16 21:25:20.232329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.232358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.232367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:30.709 [2024-12-16 21:25:20.232379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:30.709 [2024-12-16 21:25:20.232387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.232413] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:30.709 [2024-12-16 21:25:20.234434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:30.709 [2024-12-16 21:25:20.234479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.709 [2024-12-16 21:25:20.234492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:19:30.709 [2024-12-16 21:25:20.234501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.234548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.234560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:30.709 [2024-12-16 21:25:20.234568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:30.709 [2024-12-16 21:25:20.234578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.234602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:30.709 [2024-12-16 21:25:20.234644] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:30.709 [2024-12-16 21:25:20.234685] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:30.709 [2024-12-16 21:25:20.234709] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:30.709 [2024-12-16 21:25:20.234815] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:30.709 [2024-12-16 21:25:20.234829] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:30.709 [2024-12-16 21:25:20.234843] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:30.709 [2024-12-16 21:25:20.234857] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:30.709 [2024-12-16 21:25:20.234866] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:30.709 [2024-12-16 21:25:20.234878] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:30.709 [2024-12-16 21:25:20.234889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:30.709 [2024-12-16 21:25:20.234898] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:30.709 [2024-12-16 21:25:20.234909] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:30.709 [2024-12-16 21:25:20.234919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.234926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:30.709 [2024-12-16 21:25:20.234937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:19:30.709 [2024-12-16 21:25:20.234945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.235035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.709 [2024-12-16 21:25:20.235051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:30.709 [2024-12-16 21:25:20.235061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:30.709 [2024-12-16 21:25:20.235069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.709 [2024-12-16 21:25:20.235175] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:30.709 [2024-12-16 21:25:20.235192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:30.709 [2024-12-16 21:25:20.235203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.709 [2024-12-16 21:25:20.235214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.709 [2024-12-16 21:25:20.235227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:30.709 [2024-12-16 21:25:20.235235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:30.709 [2024-12-16 21:25:20.235244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:30.709 [2024-12-16 21:25:20.235253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:30.709 [2024-12-16 21:25:20.235263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:30.709 [2024-12-16 21:25:20.235271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.709 [2024-12-16 21:25:20.235281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:30.709 [2024-12-16 21:25:20.235290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:30.710 [2024-12-16 21:25:20.235300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.710 [2024-12-16 21:25:20.235308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:30.710 [2024-12-16 21:25:20.235317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:30.710 [2024-12-16 21:25:20.235326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:30.710 [2024-12-16 21:25:20.235344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:30.710 [2024-12-16 21:25:20.235353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:30.710 [2024-12-16 21:25:20.235374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.710 [2024-12-16 21:25:20.235391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:30.710 [2024-12-16 21:25:20.235399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.710 [2024-12-16 21:25:20.235417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:30.710 [2024-12-16 21:25:20.235428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.710 [2024-12-16 21:25:20.235446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:30.710 [2024-12-16 21:25:20.235453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.710 [2024-12-16 21:25:20.235469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:30.710 [2024-12-16 
21:25:20.235479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.710 [2024-12-16 21:25:20.235496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:30.710 [2024-12-16 21:25:20.235504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:30.710 [2024-12-16 21:25:20.235515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.710 [2024-12-16 21:25:20.235523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:30.710 [2024-12-16 21:25:20.235533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:30.710 [2024-12-16 21:25:20.235541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:30.710 [2024-12-16 21:25:20.235559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:30.710 [2024-12-16 21:25:20.235569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235576] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:30.710 [2024-12-16 21:25:20.235585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:30.710 [2024-12-16 21:25:20.235593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.710 [2024-12-16 21:25:20.235602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.710 [2024-12-16 21:25:20.235610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:30.710 [2024-12-16 21:25:20.235618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:30.710 [2024-12-16 21:25:20.235647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:30.710 [2024-12-16 21:25:20.235657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:30.710 [2024-12-16 21:25:20.235666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:30.710 [2024-12-16 21:25:20.235678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:30.710 [2024-12-16 21:25:20.235687] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:30.710 [2024-12-16 21:25:20.235699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.710 [2024-12-16 21:25:20.235709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:30.710 [2024-12-16 21:25:20.235718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:30.710 [2024-12-16 21:25:20.235726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:30.710 [2024-12-16 21:25:20.235736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:30.710 [2024-12-16 21:25:20.235745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:30.710 
[2024-12-16 21:25:20.235755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:30.710 [2024-12-16 21:25:20.235762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:30.710 [2024-12-16 21:25:20.235772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:30.710 [2024-12-16 21:25:20.235780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:30.710 [2024-12-16 21:25:20.235789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:30.710 [2024-12-16 21:25:20.235796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:30.710 [2024-12-16 21:25:20.235812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:30.710 [2024-12-16 21:25:20.235820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:30.710 [2024-12-16 21:25:20.235832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:30.710 [2024-12-16 21:25:20.235839] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:30.710 [2024-12-16 21:25:20.235851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.710 [2024-12-16 21:25:20.235860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:30.710 [2024-12-16 21:25:20.235869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:30.710 [2024-12-16 21:25:20.235876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:30.710 [2024-12-16 21:25:20.235886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:30.710 [2024-12-16 21:25:20.235893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.710 [2024-12-16 21:25:20.235905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:30.710 [2024-12-16 21:25:20.235912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:19:30.710 [2024-12-16 21:25:20.235921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.710 [2024-12-16 21:25:20.249839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.710 [2024-12-16 21:25:20.249883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.710 [2024-12-16 21:25:20.249895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.859 ms 00:19:30.710 [2024-12-16 21:25:20.249904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.710 [2024-12-16 21:25:20.250034] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.710 [2024-12-16 21:25:20.250050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.710 [2024-12-16 21:25:20.250060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:30.710 [2024-12-16 21:25:20.250070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.710 [2024-12-16 21:25:20.262539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.710 [2024-12-16 21:25:20.262587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.710 [2024-12-16 21:25:20.262598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.448 ms 00:19:30.710 [2024-12-16 21:25:20.262611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.710 [2024-12-16 21:25:20.262702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.710 [2024-12-16 21:25:20.262715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.710 [2024-12-16 21:25:20.262725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.710 [2024-12-16 21:25:20.262739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.710 [2024-12-16 21:25:20.263260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.710 [2024-12-16 21:25:20.263305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.710 [2024-12-16 21:25:20.263320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:19:30.710 [2024-12-16 21:25:20.263331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.710 [2024-12-16 21:25:20.263488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.710 [2024-12-16 21:25:20.263503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.710 [2024-12-16 21:25:20.263513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:30.710 [2024-12-16 21:25:20.263526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.710 [2024-12-16 21:25:20.271766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.710 [2024-12-16 21:25:20.271814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.710 [2024-12-16 21:25:20.271825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.214 ms 00:19:30.710 [2024-12-16 21:25:20.271834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.710 [2024-12-16 21:25:20.284496] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:30.710 [2024-12-16 21:25:20.284563] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:30.710 [2024-12-16 21:25:20.284578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.710 [2024-12-16 21:25:20.284590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:30.711 [2024-12-16 21:25:20.284601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.643 ms 00:19:30.711 [2024-12-16 21:25:20.284611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.304127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 
21:25:20.304185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:30.711 [2024-12-16 21:25:20.304198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.441 ms 00:19:30.711 [2024-12-16 21:25:20.304211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.307304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.307357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:30.711 [2024-12-16 21:25:20.307366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:19:30.711 [2024-12-16 21:25:20.307375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.309960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.310010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:30.711 [2024-12-16 21:25:20.310019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:19:30.711 [2024-12-16 21:25:20.310029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.310370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.310385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.711 [2024-12-16 21:25:20.310395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:30.711 [2024-12-16 21:25:20.310404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.337264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.337328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.711 [2024-12-16 21:25:20.337341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.837 ms 00:19:30.711 [2024-12-16 21:25:20.337354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.345742] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:30.711 [2024-12-16 21:25:20.364284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.364333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.711 [2024-12-16 21:25:20.364348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.830 ms 00:19:30.711 [2024-12-16 21:25:20.364356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.364446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.364461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.711 [2024-12-16 21:25:20.364473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:30.711 [2024-12-16 21:25:20.364481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.364539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.364552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.711 [2024-12-16 21:25:20.364563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:30.711 [2024-12-16 
21:25:20.364570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.364598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.364607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.711 [2024-12-16 21:25:20.364651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:30.711 [2024-12-16 21:25:20.364660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.364698] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.711 [2024-12-16 21:25:20.364708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.364717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.711 [2024-12-16 21:25:20.364725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:30.711 [2024-12-16 21:25:20.364735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.370753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.370805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.711 [2024-12-16 21:25:20.370816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.995 ms 00:19:30.711 [2024-12-16 21:25:20.370830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.370919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.711 [2024-12-16 21:25:20.370932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.711 [2024-12-16 21:25:20.370941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:30.711 [2024-12-16 21:25:20.370951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.711 [2024-12-16 21:25:20.372160] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.711 [2024-12-16 21:25:20.373522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.359 ms, result 0 00:19:30.711 [2024-12-16 21:25:20.375740] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.711 Some configs were skipped because the RPC state that can call them passed over. 
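[editor's note] The two bdev_ftl_unmap RPCs that follow trim 1024 blocks at each end of the device. The tail offset follows from the layout dump above: with 23592960 L2P entries, the last 1024-block range starts at 23592960 - 1024 = 23591936. A minimal sketch of the same two calls, assuming a running SPDK target that exposes the ftl0 bdev and the in-repo rpc.py (paths, flags, and numbers copied from the log lines below):

    # Trim the first and the last 1024 LBAs of ftl0, as trim.sh does below.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    L2P_ENTRIES=23592960        # "L2P entries" value from the layout dump above
    NUM=1024

    "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$NUM"
    "$RPC" bdev_ftl_unmap -b ftl0 --lba $((L2P_ENTRIES - NUM)) --num_blocks "$NUM"   # lba 23591936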
00:19:30.972 21:25:20 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:30.972 [2024-12-16 21:25:20.609300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.972 [2024-12-16 21:25:20.609355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:30.972 [2024-12-16 21:25:20.609371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.153 ms 00:19:30.972 [2024-12-16 21:25:20.609380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.972 [2024-12-16 21:25:20.609417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.279 ms, result 0 00:19:30.972 true 00:19:30.972 21:25:20 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:31.233 [2024-12-16 21:25:20.821367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.233 [2024-12-16 21:25:20.821427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:31.233 [2024-12-16 21:25:20.821440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:19:31.233 [2024-12-16 21:25:20.821449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.233 [2024-12-16 21:25:20.821487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.126 ms, result 0 00:19:31.233 true 00:19:31.233 21:25:20 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89677 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89677 ']' 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89677 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89677 00:19:31.233 killing process with pid 89677 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89677' 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89677 00:19:31.233 21:25:20 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89677 00:19:31.495 [2024-12-16 21:25:20.995100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:20.995171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:31.495 [2024-12-16 21:25:20.995187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:31.495 [2024-12-16 21:25:20.995196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:20.995224] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:31.495 [2024-12-16 21:25:20.995908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:20.995960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:31.495 [2024-12-16 21:25:20.995974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.669 ms 00:19:31.495 [2024-12-16 21:25:20.995985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:20.996285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:20.996299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:31.495 [2024-12-16 21:25:20.996309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:19:31.495 [2024-12-16 21:25:20.996319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:21.000890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:21.000936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:31.495 [2024-12-16 21:25:21.000947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.549 ms 00:19:31.495 [2024-12-16 21:25:21.000959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:21.007871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:21.007915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:31.495 [2024-12-16 21:25:21.007925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.859 ms 00:19:31.495 [2024-12-16 21:25:21.007943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:21.010771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:21.010822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:31.495 [2024-12-16 21:25:21.010833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:19:31.495 [2024-12-16 21:25:21.010842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:21.016726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:21.016779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:31.495 [2024-12-16 21:25:21.016796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.837 ms 00:19:31.495 [2024-12-16 21:25:21.016809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:21.016953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:21.016987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:31.495 [2024-12-16 21:25:21.016998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:31.495 [2024-12-16 21:25:21.017011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:21.020756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:21.020807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:31.495 [2024-12-16 21:25:21.020816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms 00:19:31.495 [2024-12-16 21:25:21.020828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:21.023919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:21.023975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:31.495 [2024-12-16 
21:25:21.023985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:19:31.495 [2024-12-16 21:25:21.023994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.495 [2024-12-16 21:25:21.026320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.495 [2024-12-16 21:25:21.026372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:31.495 [2024-12-16 21:25:21.026381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.264 ms 00:19:31.496 [2024-12-16 21:25:21.026390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.496 [2024-12-16 21:25:21.028943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.496 [2024-12-16 21:25:21.029016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:31.496 [2024-12-16 21:25:21.029026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.480 ms 00:19:31.496 [2024-12-16 21:25:21.029036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.496 [2024-12-16 21:25:21.029080] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:31.496 [2024-12-16 21:25:21.029099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029257] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 
21:25:21.029479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:31.496 [2024-12-16 21:25:21.029676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:31.497 [2024-12-16 21:25:21.029726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.029995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.030006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.030014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:31.497 [2024-12-16 21:25:21.030032] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:31.497 [2024-12-16 21:25:21.030040] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed315229-44e7-4b17-bfcd-321e68a18dd7 00:19:31.497 [2024-12-16 21:25:21.030051] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:31.497 [2024-12-16 21:25:21.030061] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:31.497 [2024-12-16 21:25:21.030071] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:31.497 [2024-12-16 21:25:21.030079] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:31.497 [2024-12-16 21:25:21.030090] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:31.497 [2024-12-16 21:25:21.030102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:31.497 [2024-12-16 21:25:21.030113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:31.497 [2024-12-16 21:25:21.030119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:31.497 [2024-12-16 21:25:21.030128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:31.497 [2024-12-16 21:25:21.030135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.497 [2024-12-16 21:25:21.030145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:31.497 [2024-12-16 21:25:21.030154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:19:31.497 [2024-12-16 21:25:21.030172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.497 [2024-12-16 21:25:21.032466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.497 [2024-12-16 21:25:21.032509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:31.497 [2024-12-16 21:25:21.032525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms 00:19:31.497 [2024-12-16 21:25:21.032537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.497 [2024-12-16 21:25:21.032699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:31.497 [2024-12-16 21:25:21.032713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:31.497 [2024-12-16 21:25:21.032723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:19:31.497 [2024-12-16 21:25:21.032734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.497 [2024-12-16 21:25:21.041138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.497 [2024-12-16 21:25:21.041196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.497 [2024-12-16 21:25:21.041207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.497 [2024-12-16 21:25:21.041217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.497 [2024-12-16 21:25:21.041314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.497 [2024-12-16 21:25:21.041327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.497 [2024-12-16 21:25:21.041336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.497 [2024-12-16 21:25:21.041349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.497 [2024-12-16 21:25:21.041408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.497 [2024-12-16 21:25:21.041421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.497 [2024-12-16 21:25:21.041429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.041439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.498 [2024-12-16 21:25:21.041464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.498 [2024-12-16 21:25:21.041474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.498 [2024-12-16 21:25:21.041483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.041493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.498 [2024-12-16 21:25:21.056543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.498 [2024-12-16 21:25:21.056610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.498 [2024-12-16 21:25:21.056621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.056677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.498 [2024-12-16 21:25:21.068371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.498 [2024-12-16 21:25:21.068433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.498 [2024-12-16 21:25:21.068445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.068458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.498 [2024-12-16 21:25:21.068534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.498 [2024-12-16 21:25:21.068551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.498 [2024-12-16 21:25:21.068560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.068571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:31.498 [2024-12-16 21:25:21.068608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.498 [2024-12-16 21:25:21.068620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.498 [2024-12-16 21:25:21.068710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.068721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.498 [2024-12-16 21:25:21.068806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.498 [2024-12-16 21:25:21.068822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.498 [2024-12-16 21:25:21.068830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.068840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.498 [2024-12-16 21:25:21.068875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.498 [2024-12-16 21:25:21.068886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:31.498 [2024-12-16 21:25:21.068895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.068908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.498 [2024-12-16 21:25:21.068955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.498 [2024-12-16 21:25:21.068981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.498 [2024-12-16 21:25:21.068994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.069005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.498 [2024-12-16 21:25:21.069056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.498 [2024-12-16 21:25:21.069070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.498 [2024-12-16 21:25:21.069080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.498 [2024-12-16 21:25:21.069091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.498 [2024-12-16 21:25:21.069262] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.125 ms, result 0 00:19:31.767 21:25:21 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:31.767 [2024-12-16 21:25:21.377496] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
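[editor's note] The spdk_dd invocation above reads the trimmed device back into a file: --ib names the input bdev (ftl0), --of the output file, --count the number of input blocks to copy (65536 here; this reading of --count is an assumption from spdk_dd usage, not stated in the log), and --json points spdk_dd at the saved ftl.json so it can re-create the FTL bdev inside its own process. A standalone sketch with the same arguments, copied verbatim from the command line above:

    # Dump 65536 blocks from ftl0 into the test data file, re-creating the
    # bdev stack from the JSON config the test saved earlier.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json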
00:19:31.767 [2024-12-16 21:25:21.377660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89713 ] 00:19:32.026 [2024-12-16 21:25:21.526526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.026 [2024-12-16 21:25:21.554896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.026 [2024-12-16 21:25:21.671613] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.026 [2024-12-16 21:25:21.671730] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.288 [2024-12-16 21:25:21.833394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.288 [2024-12-16 21:25:21.833452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:32.288 [2024-12-16 21:25:21.833467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:32.288 [2024-12-16 21:25:21.833475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.288 [2024-12-16 21:25:21.836113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.288 [2024-12-16 21:25:21.836168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.289 [2024-12-16 21:25:21.836179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:19:32.289 [2024-12-16 21:25:21.836187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.836297] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:32.289 [2024-12-16 21:25:21.836566] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:32.289 [2024-12-16 21:25:21.836595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.289 [2024-12-16 21:25:21.836605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.289 [2024-12-16 21:25:21.836617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:19:32.289 [2024-12-16 21:25:21.836651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.838381] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:32.289 [2024-12-16 21:25:21.842158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.289 [2024-12-16 21:25:21.842206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:32.289 [2024-12-16 21:25:21.842222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.778 ms 00:19:32.289 [2024-12-16 21:25:21.842231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.842312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.289 [2024-12-16 21:25:21.842328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:32.289 [2024-12-16 21:25:21.842337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:32.289 [2024-12-16 21:25:21.842344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.850294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:32.289 [2024-12-16 21:25:21.850340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.289 [2024-12-16 21:25:21.850350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.882 ms 00:19:32.289 [2024-12-16 21:25:21.850358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.850493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.289 [2024-12-16 21:25:21.850504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.289 [2024-12-16 21:25:21.850514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:32.289 [2024-12-16 21:25:21.850524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.850551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.289 [2024-12-16 21:25:21.850560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:32.289 [2024-12-16 21:25:21.850568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:32.289 [2024-12-16 21:25:21.850579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.850600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:32.289 [2024-12-16 21:25:21.852604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.289 [2024-12-16 21:25:21.852675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.289 [2024-12-16 21:25:21.852691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:19:32.289 [2024-12-16 21:25:21.852702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.852747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.289 [2024-12-16 21:25:21.852759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:32.289 [2024-12-16 21:25:21.852767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:32.289 [2024-12-16 21:25:21.852779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.852798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:32.289 [2024-12-16 21:25:21.852820] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:32.289 [2024-12-16 21:25:21.852864] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:32.289 [2024-12-16 21:25:21.852883] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:32.289 [2024-12-16 21:25:21.853001] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:32.289 [2024-12-16 21:25:21.853012] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:32.289 [2024-12-16 21:25:21.853024] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:32.289 [2024-12-16 21:25:21.853034] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853043] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853052] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:32.289 [2024-12-16 21:25:21.853059] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:32.289 [2024-12-16 21:25:21.853067] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:32.289 [2024-12-16 21:25:21.853074] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:32.289 [2024-12-16 21:25:21.853092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.289 [2024-12-16 21:25:21.853100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:32.289 [2024-12-16 21:25:21.853108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:19:32.289 [2024-12-16 21:25:21.853115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.853203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.289 [2024-12-16 21:25:21.853212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:32.289 [2024-12-16 21:25:21.853223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:32.289 [2024-12-16 21:25:21.853231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.289 [2024-12-16 21:25:21.853331] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:32.289 [2024-12-16 21:25:21.853354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:32.289 [2024-12-16 21:25:21.853364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:32.289 [2024-12-16 21:25:21.853391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:32.289 [2024-12-16 21:25:21.853418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.289 [2024-12-16 21:25:21.853435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:32.289 [2024-12-16 21:25:21.853443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:32.289 [2024-12-16 21:25:21.853451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.289 [2024-12-16 21:25:21.853458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:32.289 [2024-12-16 21:25:21.853467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:32.289 [2024-12-16 21:25:21.853474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:32.289 [2024-12-16 21:25:21.853490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853501] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:32.289 [2024-12-16 21:25:21.853517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:32.289 [2024-12-16 21:25:21.853546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:32.289 [2024-12-16 21:25:21.853572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:32.289 [2024-12-16 21:25:21.853596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:32.289 [2024-12-16 21:25:21.853619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.289 [2024-12-16 21:25:21.853649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:32.289 [2024-12-16 21:25:21.853657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:32.289 [2024-12-16 21:25:21.853665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.289 [2024-12-16 21:25:21.853672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:32.289 [2024-12-16 21:25:21.853680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:32.289 [2024-12-16 21:25:21.853690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:32.289 [2024-12-16 21:25:21.853705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:32.289 [2024-12-16 21:25:21.853712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853720] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:32.289 [2024-12-16 21:25:21.853729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:32.289 [2024-12-16 21:25:21.853741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.289 [2024-12-16 21:25:21.853750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.289 [2024-12-16 21:25:21.853758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:32.289 [2024-12-16 21:25:21.853767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:32.290 [2024-12-16 21:25:21.853775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:32.290 
[2024-12-16 21:25:21.853785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:32.290 [2024-12-16 21:25:21.853792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:32.290 [2024-12-16 21:25:21.853799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:32.290 [2024-12-16 21:25:21.853808] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:32.290 [2024-12-16 21:25:21.853822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.290 [2024-12-16 21:25:21.853835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:32.290 [2024-12-16 21:25:21.853844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:32.290 [2024-12-16 21:25:21.853851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:32.290 [2024-12-16 21:25:21.853858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:32.290 [2024-12-16 21:25:21.853865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:32.290 [2024-12-16 21:25:21.853873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:32.290 [2024-12-16 21:25:21.853881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:32.290 [2024-12-16 21:25:21.853894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:32.290 [2024-12-16 21:25:21.853901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:32.290 [2024-12-16 21:25:21.853908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:32.290 [2024-12-16 21:25:21.853916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:32.290 [2024-12-16 21:25:21.853923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:32.290 [2024-12-16 21:25:21.853931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:32.290 [2024-12-16 21:25:21.853938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:32.290 [2024-12-16 21:25:21.853946] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:32.290 [2024-12-16 21:25:21.853957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.290 [2024-12-16 21:25:21.853967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:32.290 [2024-12-16 21:25:21.853974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:32.290 [2024-12-16 21:25:21.853982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:32.290 [2024-12-16 21:25:21.853990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:32.290 [2024-12-16 21:25:21.853998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.854005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:32.290 [2024-12-16 21:25:21.854014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:19:32.290 [2024-12-16 21:25:21.854021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.867779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.867828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.290 [2024-12-16 21:25:21.867840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.704 ms 00:19:32.290 [2024-12-16 21:25:21.867848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.867981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.867998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:32.290 [2024-12-16 21:25:21.868007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:32.290 [2024-12-16 21:25:21.868015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.893185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.893241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.290 [2024-12-16 21:25:21.893255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.147 ms 00:19:32.290 [2024-12-16 21:25:21.893264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.893370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.893383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.290 [2024-12-16 21:25:21.893394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:32.290 [2024-12-16 21:25:21.893402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.893968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.894009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.290 [2024-12-16 21:25:21.894022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:19:32.290 [2024-12-16 21:25:21.894031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.894203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.894218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.290 [2024-12-16 21:25:21.894229] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:32.290 [2024-12-16 21:25:21.894238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.902742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.902785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.290 [2024-12-16 21:25:21.902796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.477 ms 00:19:32.290 [2024-12-16 21:25:21.902810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.906590] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:32.290 [2024-12-16 21:25:21.906659] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:32.290 [2024-12-16 21:25:21.906672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.906681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:32.290 [2024-12-16 21:25:21.906690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.745 ms 00:19:32.290 [2024-12-16 21:25:21.906697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.922825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.922872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:32.290 [2024-12-16 21:25:21.922883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.063 ms 00:19:32.290 [2024-12-16 21:25:21.922899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.925878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.925930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:32.290 [2024-12-16 21:25:21.925940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:19:32.290 [2024-12-16 21:25:21.925947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.928669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.928724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:32.290 [2024-12-16 21:25:21.928734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:19:32.290 [2024-12-16 21:25:21.928741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.929121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.929185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.290 [2024-12-16 21:25:21.929196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:19:32.290 [2024-12-16 21:25:21.929204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.954794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.954848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:32.290 [2024-12-16 21:25:21.954869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.564 ms 00:19:32.290 [2024-12-16 21:25:21.954878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.963322] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:32.290 [2024-12-16 21:25:21.982790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.982855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:32.290 [2024-12-16 21:25:21.982869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.824 ms 00:19:32.290 [2024-12-16 21:25:21.982877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.982974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.982986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:32.290 [2024-12-16 21:25:21.982999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:32.290 [2024-12-16 21:25:21.983007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.983070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.983080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:32.290 [2024-12-16 21:25:21.983090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:32.290 [2024-12-16 21:25:21.983098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.983125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.983137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:32.290 [2024-12-16 21:25:21.983146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:32.290 [2024-12-16 21:25:21.983157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.290 [2024-12-16 21:25:21.983200] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:32.290 [2024-12-16 21:25:21.983210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.290 [2024-12-16 21:25:21.983218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:32.290 [2024-12-16 21:25:21.983226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:32.291 [2024-12-16 21:25:21.983235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.551 [2024-12-16 21:25:21.989662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.551 [2024-12-16 21:25:21.989718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:32.551 [2024-12-16 21:25:21.989730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.403 ms 00:19:32.551 [2024-12-16 21:25:21.989739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.551 [2024-12-16 21:25:21.989847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.551 [2024-12-16 21:25:21.989857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:32.551 [2024-12-16 21:25:21.989867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:32.551 [2024-12-16 21:25:21.989876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.551 
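Note on the trace output above: each FTL management step is reported by trace_step as an Action followed by its name, duration, and status (0 = success), so the whole startup sequence can be profiled straight from the console log. A minimal sketch for summarizing the slowest steps, assuming the console output has been captured to build.log (the file name is illustrative, not part of this run):

  # Pair each trace_step "name:" line with the "duration:" line that follows,
  # then print the slowest FTL management steps first.
  awk '/trace_step.*name:/     { sub(/.*name: /, ""); step = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                 printf "%10.3f ms  %s\n", $0, step }' build.log | sort -rn | head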
[2024-12-16 21:25:21.991195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:32.551 [2024-12-16 21:25:21.992565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.426 ms, result 0 00:19:32.551 [2024-12-16 21:25:21.994011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:32.551 [2024-12-16 21:25:22.001216] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.495  [2024-12-16T21:25:24.139Z] Copying: 13/256 [MB] (13 MBps) [2024-12-16T21:25:25.085Z] Copying: 26/256 [MB] (12 MBps) [2024-12-16T21:25:26.467Z] Copying: 39/256 [MB] (13 MBps) [2024-12-16T21:25:27.408Z] Copying: 58/256 [MB] (19 MBps) [2024-12-16T21:25:28.349Z] Copying: 77/256 [MB] (18 MBps) [2024-12-16T21:25:29.289Z] Copying: 94/256 [MB] (16 MBps) [2024-12-16T21:25:30.228Z] Copying: 112/256 [MB] (18 MBps) [2024-12-16T21:25:31.168Z] Copying: 126/256 [MB] (13 MBps) [2024-12-16T21:25:32.106Z] Copying: 140/256 [MB] (14 MBps) [2024-12-16T21:25:33.481Z] Copying: 161/256 [MB] (20 MBps) [2024-12-16T21:25:34.421Z] Copying: 184/256 [MB] (23 MBps) [2024-12-16T21:25:35.358Z] Copying: 206/256 [MB] (22 MBps) [2024-12-16T21:25:36.295Z] Copying: 226/256 [MB] (20 MBps) [2024-12-16T21:25:36.863Z] Copying: 242/256 [MB] (15 MBps) [2024-12-16T21:25:37.122Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-16 21:25:36.895133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.422 [2024-12-16 21:25:36.896375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.896416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:47.422 [2024-12-16 21:25:36.896431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:47.422 [2024-12-16 21:25:36.896447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.422 [2024-12-16 21:25:36.896473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:47.422 [2024-12-16 21:25:36.897402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.897438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:47.422 [2024-12-16 21:25:36.897451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:19:47.422 [2024-12-16 21:25:36.897462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.422 [2024-12-16 21:25:36.897829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.897857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:47.422 [2024-12-16 21:25:36.897874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:19:47.422 [2024-12-16 21:25:36.897890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.422 [2024-12-16 21:25:36.904116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.904156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:47.422 [2024-12-16 21:25:36.904169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:19:47.422 [2024-12-16 21:25:36.904180] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.422 [2024-12-16 21:25:36.911838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.911867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:47.422 [2024-12-16 21:25:36.911875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.631 ms 00:19:47.422 [2024-12-16 21:25:36.911884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.422 [2024-12-16 21:25:36.913576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.913610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.422 [2024-12-16 21:25:36.913618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:19:47.422 [2024-12-16 21:25:36.913624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.422 [2024-12-16 21:25:36.917395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.917427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.422 [2024-12-16 21:25:36.917434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.732 ms 00:19:47.422 [2024-12-16 21:25:36.917440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.422 [2024-12-16 21:25:36.917538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.917546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.422 [2024-12-16 21:25:36.917552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:47.422 [2024-12-16 21:25:36.917560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.422 [2024-12-16 21:25:36.919558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.919588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:47.422 [2024-12-16 21:25:36.919595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.985 ms 00:19:47.422 [2024-12-16 21:25:36.919600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.422 [2024-12-16 21:25:36.921362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.422 [2024-12-16 21:25:36.921401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:47.422 [2024-12-16 21:25:36.921409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:19:47.423 [2024-12-16 21:25:36.921415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.423 [2024-12-16 21:25:36.922809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.423 [2024-12-16 21:25:36.922840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.423 [2024-12-16 21:25:36.922847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:19:47.423 [2024-12-16 21:25:36.922852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.423 [2024-12-16 21:25:36.924088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.423 [2024-12-16 21:25:36.924118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.423 [2024-12-16 21:25:36.924124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.186 ms 00:19:47.423 [2024-12-16 21:25:36.924130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.423 [2024-12-16 21:25:36.924156] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.423 [2024-12-16 21:25:36.924167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924299] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 
[2024-12-16 21:25:36.924449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.423 [2024-12-16 21:25:36.924511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:19:47.424 [2024-12-16 21:25:36.924590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.424 [2024-12-16 21:25:36.924759] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.424 [2024-12-16 21:25:36.924765] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed315229-44e7-4b17-bfcd-321e68a18dd7 00:19:47.424 [2024-12-16 21:25:36.924775] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.424 [2024-12-16 21:25:36.924780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.424 [2024-12-16 21:25:36.924789] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.424 [2024-12-16 21:25:36.924795] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.424 [2024-12-16 21:25:36.924800] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.424 [2024-12-16 21:25:36.924806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.424 [2024-12-16 21:25:36.924817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.424 [2024-12-16 21:25:36.924822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.424 [2024-12-16 21:25:36.924827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.424 [2024-12-16 21:25:36.924833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.424 [2024-12-16 21:25:36.924838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.424 [2024-12-16 21:25:36.924845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:19:47.424 [2024-12-16 21:25:36.924850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.926171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.424 [2024-12-16 21:25:36.926193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.424 [2024-12-16 21:25:36.926201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:19:47.424 [2024-12-16 21:25:36.926210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.926282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.424 [2024-12-16 21:25:36.926289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.424 [2024-12-16 21:25:36.926295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:47.424 [2024-12-16 21:25:36.926301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.930851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.424 [2024-12-16 21:25:36.930880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.424 [2024-12-16 21:25:36.930886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.424 [2024-12-16 21:25:36.930896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.930942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.424 [2024-12-16 21:25:36.930952] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.424 [2024-12-16 21:25:36.930961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.424 [2024-12-16 21:25:36.930967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.930998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.424 [2024-12-16 21:25:36.931004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.424 [2024-12-16 21:25:36.931013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.424 [2024-12-16 21:25:36.931019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.931034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.424 [2024-12-16 21:25:36.931039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.424 [2024-12-16 21:25:36.931045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.424 [2024-12-16 21:25:36.931050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.938929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.424 [2024-12-16 21:25:36.938962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.424 [2024-12-16 21:25:36.938970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.424 [2024-12-16 21:25:36.938978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.945290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.424 [2024-12-16 21:25:36.945321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.424 [2024-12-16 21:25:36.945329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.424 [2024-12-16 21:25:36.945335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.945361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.424 [2024-12-16 21:25:36.945368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.424 [2024-12-16 21:25:36.945374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.424 [2024-12-16 21:25:36.945380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.424 [2024-12-16 21:25:36.945402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.424 [2024-12-16 21:25:36.945411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.425 [2024-12-16 21:25:36.945417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.425 [2024-12-16 21:25:36.945423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.425 [2024-12-16 21:25:36.945475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.425 [2024-12-16 21:25:36.945482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.425 [2024-12-16 21:25:36.945493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.425 [2024-12-16 21:25:36.945499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.425 [2024-12-16 21:25:36.945520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:19:47.425 [2024-12-16 21:25:36.945534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:47.425 [2024-12-16 21:25:36.945540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.425 [2024-12-16 21:25:36.945546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.425 [2024-12-16 21:25:36.945580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.425 [2024-12-16 21:25:36.945587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.425 [2024-12-16 21:25:36.945593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.425 [2024-12-16 21:25:36.945599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.425 [2024-12-16 21:25:36.945644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.425 [2024-12-16 21:25:36.945656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.425 [2024-12-16 21:25:36.945662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.425 [2024-12-16 21:25:36.945667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.425 [2024-12-16 21:25:36.945772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.384 ms, result 0 00:19:47.425 00:19:47.425 00:19:47.425 21:25:37 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:47.991 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:47.991 21:25:37 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:47.991 21:25:37 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:47.991 21:25:37 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:47.991 21:25:37 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:47.991 21:25:37 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:47.991 21:25:37 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:47.991 21:25:37 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89677 00:19:47.991 21:25:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89677 ']' 00:19:47.991 Process with pid 89677 is not found 00:19:47.991 21:25:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89677 00:19:47.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89677) - No such process 00:19:47.991 21:25:37 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 89677 is not found' 00:19:47.991 00:19:47.991 real 1m8.404s 00:19:47.991 user 1m32.361s 00:19:47.991 sys 0m5.223s 00:19:47.991 21:25:37 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.991 21:25:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:47.991 ************************************ 00:19:47.991 END TEST ftl_trim 00:19:47.991 ************************************ 00:19:47.991 21:25:37 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:47.991 21:25:37 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:47.991 21:25:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.991 21:25:37 ftl -- common/autotest_common.sh@10 -- 
# set +x 00:19:47.991 ************************************ 00:19:47.991 START TEST ftl_restore 00:19:47.991 ************************************ 00:19:47.991 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:48.252 * Looking for test storage... 00:19:48.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:48.252 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:48.252 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:48.252 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:19:48.252 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:48.252 21:25:37 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.252 21:25:37 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.252 21:25:37 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.252 21:25:37 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.253 21:25:37 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.253 --rc genhtml_branch_coverage=1 00:19:48.253 --rc genhtml_function_coverage=1 00:19:48.253 --rc genhtml_legend=1 00:19:48.253 --rc geninfo_all_blocks=1 00:19:48.253 --rc geninfo_unexecuted_blocks=1 00:19:48.253 00:19:48.253 ' 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.253 --rc genhtml_branch_coverage=1 00:19:48.253 --rc genhtml_function_coverage=1 00:19:48.253 --rc genhtml_legend=1 00:19:48.253 --rc geninfo_all_blocks=1 00:19:48.253 --rc geninfo_unexecuted_blocks=1 00:19:48.253 00:19:48.253 ' 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.253 --rc genhtml_branch_coverage=1 00:19:48.253 --rc genhtml_function_coverage=1 00:19:48.253 --rc genhtml_legend=1 00:19:48.253 --rc geninfo_all_blocks=1 00:19:48.253 --rc geninfo_unexecuted_blocks=1 00:19:48.253 00:19:48.253 ' 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.253 --rc genhtml_branch_coverage=1 00:19:48.253 --rc genhtml_function_coverage=1 00:19:48.253 --rc genhtml_legend=1 00:19:48.253 --rc geninfo_all_blocks=1 00:19:48.253 --rc geninfo_unexecuted_blocks=1 00:19:48.253 00:19:48.253 ' 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HD0GGyuNiA 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:48.253 
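For reference, the getopts handling traced above means restore.sh is driven with -c supplying the PCI address of the NV-cache device and the remaining positional argument naming the base device, matching the invocation used for this run:

  # Invocation pattern for the FTL restore test (addresses taken from this run):
  #   -c <BDF>         -> NV-cache device (here 0000:00:10.0)
  #   positional <BDF> -> base device    (here 0000:00:11.0)
  ./test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0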
21:25:37 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=89949 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 89949 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 89949 ']' 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.253 21:25:37 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:48.253 21:25:37 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.253 [2024-12-16 21:25:37.901016] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:48.253 [2024-12-16 21:25:37.901163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89949 ] 00:19:48.515 [2024-12-16 21:25:38.048779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.515 [2024-12-16 21:25:38.077774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.092 21:25:38 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.092 21:25:38 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:19:49.092 21:25:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:49.092 21:25:38 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:49.092 21:25:38 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:49.092 21:25:38 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:49.092 21:25:38 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:49.092 21:25:38 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:49.353 21:25:39 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:49.353 21:25:39 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:49.614 21:25:39 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:49.614 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:49.615 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:49.615 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:49.615 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:49.615 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:49.615 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:49.615 { 00:19:49.615 "name": "nvme0n1", 00:19:49.615 "aliases": [ 00:19:49.615 "9a60d83b-4fd6-48b0-8b70-d39fcbe38dc9" 00:19:49.615 ], 00:19:49.615 "product_name": "NVMe disk", 00:19:49.615 "block_size": 4096, 00:19:49.615 "num_blocks": 1310720, 00:19:49.615 "uuid": 
"9a60d83b-4fd6-48b0-8b70-d39fcbe38dc9", 00:19:49.615 "numa_id": -1, 00:19:49.615 "assigned_rate_limits": { 00:19:49.615 "rw_ios_per_sec": 0, 00:19:49.615 "rw_mbytes_per_sec": 0, 00:19:49.615 "r_mbytes_per_sec": 0, 00:19:49.615 "w_mbytes_per_sec": 0 00:19:49.615 }, 00:19:49.615 "claimed": true, 00:19:49.615 "claim_type": "read_many_write_one", 00:19:49.615 "zoned": false, 00:19:49.615 "supported_io_types": { 00:19:49.615 "read": true, 00:19:49.615 "write": true, 00:19:49.615 "unmap": true, 00:19:49.615 "flush": true, 00:19:49.615 "reset": true, 00:19:49.615 "nvme_admin": true, 00:19:49.615 "nvme_io": true, 00:19:49.615 "nvme_io_md": false, 00:19:49.615 "write_zeroes": true, 00:19:49.615 "zcopy": false, 00:19:49.615 "get_zone_info": false, 00:19:49.615 "zone_management": false, 00:19:49.615 "zone_append": false, 00:19:49.615 "compare": true, 00:19:49.615 "compare_and_write": false, 00:19:49.615 "abort": true, 00:19:49.615 "seek_hole": false, 00:19:49.615 "seek_data": false, 00:19:49.615 "copy": true, 00:19:49.615 "nvme_iov_md": false 00:19:49.615 }, 00:19:49.615 "driver_specific": { 00:19:49.615 "nvme": [ 00:19:49.615 { 00:19:49.615 "pci_address": "0000:00:11.0", 00:19:49.615 "trid": { 00:19:49.615 "trtype": "PCIe", 00:19:49.615 "traddr": "0000:00:11.0" 00:19:49.615 }, 00:19:49.615 "ctrlr_data": { 00:19:49.615 "cntlid": 0, 00:19:49.615 "vendor_id": "0x1b36", 00:19:49.615 "model_number": "QEMU NVMe Ctrl", 00:19:49.615 "serial_number": "12341", 00:19:49.615 "firmware_revision": "8.0.0", 00:19:49.615 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:49.615 "oacs": { 00:19:49.615 "security": 0, 00:19:49.615 "format": 1, 00:19:49.615 "firmware": 0, 00:19:49.615 "ns_manage": 1 00:19:49.615 }, 00:19:49.615 "multi_ctrlr": false, 00:19:49.615 "ana_reporting": false 00:19:49.615 }, 00:19:49.615 "vs": { 00:19:49.615 "nvme_version": "1.4" 00:19:49.615 }, 00:19:49.615 "ns_data": { 00:19:49.615 "id": 1, 00:19:49.615 "can_share": false 00:19:49.615 } 00:19:49.615 } 00:19:49.615 ], 00:19:49.615 "mp_policy": "active_passive" 00:19:49.615 } 00:19:49.615 } 00:19:49.615 ]' 00:19:49.615 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:49.615 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:49.615 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:49.876 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:49.876 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:49.876 21:25:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:19:49.876 21:25:39 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:49.877 21:25:39 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:49.877 21:25:39 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:49.877 21:25:39 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:49.877 21:25:39 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:49.877 21:25:39 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=9da97f88-1f6b-4134-a1b0-1cbc21e1f200 00:19:49.877 21:25:39 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:49.877 21:25:39 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9da97f88-1f6b-4134-a1b0-1cbc21e1f200 00:19:50.138 21:25:39 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:19:50.398 21:25:40 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=dd28b129-35b7-42b6-9446-88f33ccfd6f3 00:19:50.398 21:25:40 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dd28b129-35b7-42b6-9446-88f33ccfd6f3 00:19:50.659 21:25:40 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:50.659 21:25:40 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:50.659 21:25:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:50.660 21:25:40 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:50.660 21:25:40 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:50.660 21:25:40 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:50.660 21:25:40 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:50.660 21:25:40 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:50.660 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:50.660 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:50.660 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:50.660 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:50.660 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:50.921 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:50.921 { 00:19:50.921 "name": "142bc28a-9cd6-41ec-bb24-d6ab073f2ece", 00:19:50.921 "aliases": [ 00:19:50.921 "lvs/nvme0n1p0" 00:19:50.921 ], 00:19:50.921 "product_name": "Logical Volume", 00:19:50.921 "block_size": 4096, 00:19:50.921 "num_blocks": 26476544, 00:19:50.921 "uuid": "142bc28a-9cd6-41ec-bb24-d6ab073f2ece", 00:19:50.921 "assigned_rate_limits": { 00:19:50.921 "rw_ios_per_sec": 0, 00:19:50.921 "rw_mbytes_per_sec": 0, 00:19:50.921 "r_mbytes_per_sec": 0, 00:19:50.921 "w_mbytes_per_sec": 0 00:19:50.921 }, 00:19:50.921 "claimed": false, 00:19:50.921 "zoned": false, 00:19:50.921 "supported_io_types": { 00:19:50.921 "read": true, 00:19:50.921 "write": true, 00:19:50.921 "unmap": true, 00:19:50.921 "flush": false, 00:19:50.921 "reset": true, 00:19:50.921 "nvme_admin": false, 00:19:50.921 "nvme_io": false, 00:19:50.921 "nvme_io_md": false, 00:19:50.921 "write_zeroes": true, 00:19:50.921 "zcopy": false, 00:19:50.921 "get_zone_info": false, 00:19:50.921 "zone_management": false, 00:19:50.921 "zone_append": false, 00:19:50.921 "compare": false, 00:19:50.921 "compare_and_write": false, 00:19:50.921 "abort": false, 00:19:50.921 "seek_hole": true, 00:19:50.921 "seek_data": true, 00:19:50.921 "copy": false, 00:19:50.921 "nvme_iov_md": false 00:19:50.921 }, 00:19:50.921 "driver_specific": { 00:19:50.921 "lvol": { 00:19:50.921 "lvol_store_uuid": "dd28b129-35b7-42b6-9446-88f33ccfd6f3", 00:19:50.921 "base_bdev": "nvme0n1", 00:19:50.921 "thin_provision": true, 00:19:50.921 "num_allocated_clusters": 0, 00:19:50.921 "snapshot": false, 00:19:50.921 "clone": false, 00:19:50.921 "esnap_clone": false 00:19:50.921 } 00:19:50.921 } 00:19:50.921 } 00:19:50.921 ]' 00:19:50.921 21:25:40 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:50.921 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:50.921 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:50.921 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:50.921 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:50.921 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:50.921 21:25:40 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:50.921 21:25:40 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:50.922 21:25:40 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:51.181 21:25:40 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:51.181 21:25:40 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:51.181 21:25:40 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:51.181 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:51.181 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:51.181 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:51.181 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:51.181 21:25:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:51.440 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:51.440 { 00:19:51.440 "name": "142bc28a-9cd6-41ec-bb24-d6ab073f2ece", 00:19:51.440 "aliases": [ 00:19:51.440 "lvs/nvme0n1p0" 00:19:51.440 ], 00:19:51.440 "product_name": "Logical Volume", 00:19:51.440 "block_size": 4096, 00:19:51.440 "num_blocks": 26476544, 00:19:51.440 "uuid": "142bc28a-9cd6-41ec-bb24-d6ab073f2ece", 00:19:51.440 "assigned_rate_limits": { 00:19:51.440 "rw_ios_per_sec": 0, 00:19:51.440 "rw_mbytes_per_sec": 0, 00:19:51.440 "r_mbytes_per_sec": 0, 00:19:51.440 "w_mbytes_per_sec": 0 00:19:51.440 }, 00:19:51.440 "claimed": false, 00:19:51.440 "zoned": false, 00:19:51.440 "supported_io_types": { 00:19:51.440 "read": true, 00:19:51.440 "write": true, 00:19:51.440 "unmap": true, 00:19:51.440 "flush": false, 00:19:51.440 "reset": true, 00:19:51.440 "nvme_admin": false, 00:19:51.440 "nvme_io": false, 00:19:51.440 "nvme_io_md": false, 00:19:51.440 "write_zeroes": true, 00:19:51.440 "zcopy": false, 00:19:51.440 "get_zone_info": false, 00:19:51.440 "zone_management": false, 00:19:51.440 "zone_append": false, 00:19:51.440 "compare": false, 00:19:51.440 "compare_and_write": false, 00:19:51.440 "abort": false, 00:19:51.440 "seek_hole": true, 00:19:51.440 "seek_data": true, 00:19:51.440 "copy": false, 00:19:51.440 "nvme_iov_md": false 00:19:51.440 }, 00:19:51.440 "driver_specific": { 00:19:51.440 "lvol": { 00:19:51.440 "lvol_store_uuid": "dd28b129-35b7-42b6-9446-88f33ccfd6f3", 00:19:51.440 "base_bdev": "nvme0n1", 00:19:51.440 "thin_provision": true, 00:19:51.440 "num_allocated_clusters": 0, 00:19:51.440 "snapshot": false, 00:19:51.440 "clone": false, 00:19:51.440 "esnap_clone": false 00:19:51.440 } 00:19:51.440 } 00:19:51.440 } 00:19:51.440 ]' 00:19:51.440 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
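The jq pair above is the core of get_bdev_size: pull block_size and num_blocks out of bdev_get_bdevs, multiply, and convert to MiB. That is where the earlier 5120 figure came from (nvme0n1: 4096 B x 1310720 blocks) and where the 103424 for the logical volume comes from (4096 B x 26476544 blocks). Reduced to its essentials (a sketch, not the helper verbatim; the rpc.py path is abbreviated):

    get_bdev_size() {
        local bdev_name=$1 bs nb
        bs=$(rpc.py bdev_get_bdevs -b "$bdev_name" | jq '.[] .block_size')
        nb=$(rpc.py bdev_get_bdevs -b "$bdev_name" | jq '.[] .num_blocks')
        echo $(( bs * nb / 1024 / 1024 ))   # bytes -> MiB
    }
    # 4096 * 1310720 / 1048576 = 5120 MiB; 4096 * 26476544 / 1048576 = 103424 MiB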
00:19:51.440 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:51.440 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:51.440 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:51.440 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:51.440 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:51.440 21:25:41 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:51.440 21:25:41 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:51.699 21:25:41 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:51.699 21:25:41 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:51.699 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:51.699 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:51.699 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:51.699 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:51.699 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 142bc28a-9cd6-41ec-bb24-d6ab073f2ece 00:19:51.957 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:51.957 { 00:19:51.957 "name": "142bc28a-9cd6-41ec-bb24-d6ab073f2ece", 00:19:51.957 "aliases": [ 00:19:51.957 "lvs/nvme0n1p0" 00:19:51.957 ], 00:19:51.957 "product_name": "Logical Volume", 00:19:51.957 "block_size": 4096, 00:19:51.957 "num_blocks": 26476544, 00:19:51.957 "uuid": "142bc28a-9cd6-41ec-bb24-d6ab073f2ece", 00:19:51.957 "assigned_rate_limits": { 00:19:51.957 "rw_ios_per_sec": 0, 00:19:51.957 "rw_mbytes_per_sec": 0, 00:19:51.957 "r_mbytes_per_sec": 0, 00:19:51.957 "w_mbytes_per_sec": 0 00:19:51.957 }, 00:19:51.957 "claimed": false, 00:19:51.957 "zoned": false, 00:19:51.957 "supported_io_types": { 00:19:51.957 "read": true, 00:19:51.957 "write": true, 00:19:51.957 "unmap": true, 00:19:51.957 "flush": false, 00:19:51.957 "reset": true, 00:19:51.957 "nvme_admin": false, 00:19:51.957 "nvme_io": false, 00:19:51.957 "nvme_io_md": false, 00:19:51.957 "write_zeroes": true, 00:19:51.957 "zcopy": false, 00:19:51.957 "get_zone_info": false, 00:19:51.957 "zone_management": false, 00:19:51.957 "zone_append": false, 00:19:51.957 "compare": false, 00:19:51.957 "compare_and_write": false, 00:19:51.957 "abort": false, 00:19:51.957 "seek_hole": true, 00:19:51.957 "seek_data": true, 00:19:51.957 "copy": false, 00:19:51.957 "nvme_iov_md": false 00:19:51.957 }, 00:19:51.957 "driver_specific": { 00:19:51.957 "lvol": { 00:19:51.957 "lvol_store_uuid": "dd28b129-35b7-42b6-9446-88f33ccfd6f3", 00:19:51.957 "base_bdev": "nvme0n1", 00:19:51.957 "thin_provision": true, 00:19:51.957 "num_allocated_clusters": 0, 00:19:51.957 "snapshot": false, 00:19:51.957 "clone": false, 00:19:51.957 "esnap_clone": false 00:19:51.957 } 00:19:51.957 } 00:19:51.957 } 00:19:51.957 ]' 00:19:51.957 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:51.957 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:51.957 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:51.957 21:25:41 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:19:51.957 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:51.957 21:25:41 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:51.957 21:25:41 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:51.957 21:25:41 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 142bc28a-9cd6-41ec-bb24-d6ab073f2ece --l2p_dram_limit 10' 00:19:51.957 21:25:41 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:51.957 21:25:41 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:51.957 21:25:41 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:51.957 21:25:41 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:51.957 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:51.958 21:25:41 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 142bc28a-9cd6-41ec-bb24-d6ab073f2ece --l2p_dram_limit 10 -c nvc0n1p0 00:19:52.219 [2024-12-16 21:25:41.730602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.730651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.219 [2024-12-16 21:25:41.730662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.219 [2024-12-16 21:25:41.730669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.730711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.730720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.219 [2024-12-16 21:25:41.730728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:52.219 [2024-12-16 21:25:41.730737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.730752] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.219 [2024-12-16 21:25:41.731047] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.219 [2024-12-16 21:25:41.731074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.731081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.219 [2024-12-16 21:25:41.731087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:52.219 [2024-12-16 21:25:41.731095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.731120] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 59037ec0-dba9-4cf7-9db3-267d4c6f8c8f 00:19:52.219 [2024-12-16 21:25:41.732075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.732098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:52.219 [2024-12-16 21:25:41.732109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:52.219 [2024-12-16 21:25:41.732115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.736738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 
21:25:41.736764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.219 [2024-12-16 21:25:41.736773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.567 ms 00:19:52.219 [2024-12-16 21:25:41.736779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.736840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.736847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.219 [2024-12-16 21:25:41.736854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:52.219 [2024-12-16 21:25:41.736862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.736902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.736912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.219 [2024-12-16 21:25:41.736920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:52.219 [2024-12-16 21:25:41.736925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.736943] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.219 [2024-12-16 21:25:41.738182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.738210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.219 [2024-12-16 21:25:41.738217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:19:52.219 [2024-12-16 21:25:41.738225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.738252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.738260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.219 [2024-12-16 21:25:41.738266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:52.219 [2024-12-16 21:25:41.738275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.738287] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:52.219 [2024-12-16 21:25:41.738400] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.219 [2024-12-16 21:25:41.738409] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.219 [2024-12-16 21:25:41.738418] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:52.219 [2024-12-16 21:25:41.738425] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.219 [2024-12-16 21:25:41.738438] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.219 [2024-12-16 21:25:41.738444] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:52.219 [2024-12-16 21:25:41.738452] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.219 [2024-12-16 21:25:41.738457] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.219 [2024-12-16 21:25:41.738464] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.219 [2024-12-16 21:25:41.738470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.738477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.219 [2024-12-16 21:25:41.738483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:19:52.219 [2024-12-16 21:25:41.738490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.738554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.219 [2024-12-16 21:25:41.738567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.219 [2024-12-16 21:25:41.738572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:52.219 [2024-12-16 21:25:41.738580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.219 [2024-12-16 21:25:41.738671] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.219 [2024-12-16 21:25:41.738682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.219 [2024-12-16 21:25:41.738688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.219 [2024-12-16 21:25:41.738696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.219 [2024-12-16 21:25:41.738701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.219 [2024-12-16 21:25:41.738708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.219 [2024-12-16 21:25:41.738713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:52.219 [2024-12-16 21:25:41.738720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.219 [2024-12-16 21:25:41.738725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.220 [2024-12-16 21:25:41.738736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.220 [2024-12-16 21:25:41.738742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:52.220 [2024-12-16 21:25:41.738747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.220 [2024-12-16 21:25:41.738754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.220 [2024-12-16 21:25:41.738759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:52.220 [2024-12-16 21:25:41.738767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.220 [2024-12-16 21:25:41.738779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:52.220 [2024-12-16 21:25:41.738783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.220 [2024-12-16 21:25:41.738795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.220 [2024-12-16 21:25:41.738805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.220 
[2024-12-16 21:25:41.738812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.220 [2024-12-16 21:25:41.738823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.220 [2024-12-16 21:25:41.738828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.220 [2024-12-16 21:25:41.738841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.220 [2024-12-16 21:25:41.738850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.220 [2024-12-16 21:25:41.738863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.220 [2024-12-16 21:25:41.738868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.220 [2024-12-16 21:25:41.738880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.220 [2024-12-16 21:25:41.738887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:52.220 [2024-12-16 21:25:41.738893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.220 [2024-12-16 21:25:41.738900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.220 [2024-12-16 21:25:41.738906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:52.220 [2024-12-16 21:25:41.738912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.220 [2024-12-16 21:25:41.738925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:52.220 [2024-12-16 21:25:41.738931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738937] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.220 [2024-12-16 21:25:41.738950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.220 [2024-12-16 21:25:41.738960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.220 [2024-12-16 21:25:41.738966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.220 [2024-12-16 21:25:41.738974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:52.220 [2024-12-16 21:25:41.738980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.220 [2024-12-16 21:25:41.738987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.220 [2024-12-16 21:25:41.738993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.220 [2024-12-16 21:25:41.739001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.220 [2024-12-16 21:25:41.739007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.220 [2024-12-16 21:25:41.739015] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.220 [2024-12-16 
21:25:41.739024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.220 [2024-12-16 21:25:41.739033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:52.220 [2024-12-16 21:25:41.739039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:52.220 [2024-12-16 21:25:41.739047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:52.220 [2024-12-16 21:25:41.739053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:52.220 [2024-12-16 21:25:41.739060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:52.220 [2024-12-16 21:25:41.739066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:52.220 [2024-12-16 21:25:41.739075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:52.220 [2024-12-16 21:25:41.739081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:52.220 [2024-12-16 21:25:41.739088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:52.220 [2024-12-16 21:25:41.739094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:52.220 [2024-12-16 21:25:41.739102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:52.220 [2024-12-16 21:25:41.739108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:52.220 [2024-12-16 21:25:41.739115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:52.220 [2024-12-16 21:25:41.739121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:52.220 [2024-12-16 21:25:41.739128] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.220 [2024-12-16 21:25:41.739135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.220 [2024-12-16 21:25:41.739143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:52.220 [2024-12-16 21:25:41.739149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.220 [2024-12-16 21:25:41.739157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.220 [2024-12-16 21:25:41.739163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.220 [2024-12-16 21:25:41.739170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.220 [2024-12-16 21:25:41.739176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.220 [2024-12-16 21:25:41.739187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:19:52.220 [2024-12-16 21:25:41.739193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.220 [2024-12-16 21:25:41.739230] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:52.220 [2024-12-16 21:25:41.739243] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:56.512 [2024-12-16 21:25:45.435110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.435208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:56.512 [2024-12-16 21:25:45.435232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3695.851 ms 00:19:56.512 [2024-12-16 21:25:45.435243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.455010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.455074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.512 [2024-12-16 21:25:45.455092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.623 ms 00:19:56.512 [2024-12-16 21:25:45.455102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.455259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.455274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:56.512 [2024-12-16 21:25:45.455287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:56.512 [2024-12-16 21:25:45.455305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.472902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.472963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.512 [2024-12-16 21:25:45.472979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.551 ms 00:19:56.512 [2024-12-16 21:25:45.473016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.473059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.473069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.512 [2024-12-16 21:25:45.473082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:56.512 [2024-12-16 21:25:45.473091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.473858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.473903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.512 [2024-12-16 21:25:45.473917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:19:56.512 [2024-12-16 21:25:45.473927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 
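Two details in this stretch are worth annotating. First, the "restore.sh: line 54: [: : integer expression expected" message a few entries back is a real, if harmless, scripting bug: the traced test '[' '' -eq 1 ']' hands an empty string to -eq, which only accepts integers, so the test errors out with status 2 instead of evaluating to false. A defensive rewrite:

    x=''
    [ "$x" -eq 1 ]        # error: "[: : integer expression expected", status 2
    [ "${x:-0}" -eq 1 ]   # default the empty value to 0: status 1, no error
    [[ $x == 1 ]]         # or compare as strings, which tolerates the empty case

Second, the --l2p_dram_limit 10 passed to bdev_ftl_create explains the "l2p maximum resident size is: 9 (of 10) MiB" notice just below: the full L2P table needs 20971520 entries x 4 bytes = 80 MiB (the 80.00 MiB l2p region in the layout dump above), but only about 10 MiB of it may stay resident in DRAM, with the remaining ~1 MiB of the budget presumably spent on cache bookkeeping.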
[2024-12-16 21:25:45.474069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.474081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.512 [2024-12-16 21:25:45.474097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:56.512 [2024-12-16 21:25:45.474106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.486069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.486114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.512 [2024-12-16 21:25:45.486130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.936 ms 00:19:56.512 [2024-12-16 21:25:45.486139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.508282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:56.512 [2024-12-16 21:25:45.513514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.513570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:56.512 [2024-12-16 21:25:45.513585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.285 ms 00:19:56.512 [2024-12-16 21:25:45.513596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.613068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.613139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:56.512 [2024-12-16 21:25:45.613157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.404 ms 00:19:56.512 [2024-12-16 21:25:45.613180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.613419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.613438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:56.512 [2024-12-16 21:25:45.613448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:19:56.512 [2024-12-16 21:25:45.613463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.620113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.620174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:56.512 [2024-12-16 21:25:45.620190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.628 ms 00:19:56.512 [2024-12-16 21:25:45.620203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.625573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.625660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:56.512 [2024-12-16 21:25:45.625673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.318 ms 00:19:56.512 [2024-12-16 21:25:45.625684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.626060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.626088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:56.512 
[2024-12-16 21:25:45.626099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:19:56.512 [2024-12-16 21:25:45.626117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.675958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.676023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:56.512 [2024-12-16 21:25:45.676042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.795 ms 00:19:56.512 [2024-12-16 21:25:45.676055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.684167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.684225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:56.512 [2024-12-16 21:25:45.684239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.053 ms 00:19:56.512 [2024-12-16 21:25:45.684252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.690257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.690311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:56.512 [2024-12-16 21:25:45.690322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.977 ms 00:19:56.512 [2024-12-16 21:25:45.690333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.696855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.696911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:56.512 [2024-12-16 21:25:45.696923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.494 ms 00:19:56.512 [2024-12-16 21:25:45.696939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.696973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.697001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:56.512 [2024-12-16 21:25:45.697011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:56.512 [2024-12-16 21:25:45.697023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.697144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:45.697168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:56.512 [2024-12-16 21:25:45.697185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:56.512 [2024-12-16 21:25:45.697200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:45.698604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3967.385 ms, result 0 00:19:56.512 { 00:19:56.512 "name": "ftl0", 00:19:56.512 "uuid": "59037ec0-dba9-4cf7-9db3-267d4c6f8c8f" 00:19:56.512 } 00:19:56.512 21:25:45 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:56.512 21:25:45 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:56.512 21:25:45 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:56.512 21:25:45 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:56.512 [2024-12-16 21:25:46.145772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:46.145826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:56.512 [2024-12-16 21:25:46.145849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:56.512 [2024-12-16 21:25:46.145859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:46.145894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:56.512 [2024-12-16 21:25:46.146903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:46.146957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:56.512 [2024-12-16 21:25:46.146970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:19:56.512 [2024-12-16 21:25:46.146982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.512 [2024-12-16 21:25:46.147274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.512 [2024-12-16 21:25:46.147299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:56.512 [2024-12-16 21:25:46.147314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:19:56.512 [2024-12-16 21:25:46.147331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.150597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.513 [2024-12-16 21:25:46.150636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:56.513 [2024-12-16 21:25:46.150646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.248 ms 00:19:56.513 [2024-12-16 21:25:46.150657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.156786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.513 [2024-12-16 21:25:46.156831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:56.513 [2024-12-16 21:25:46.156844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.111 ms 00:19:56.513 [2024-12-16 21:25:46.156864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.159868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.513 [2024-12-16 21:25:46.159930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:56.513 [2024-12-16 21:25:46.159941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms 00:19:56.513 [2024-12-16 21:25:46.159951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.167494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.513 [2024-12-16 21:25:46.167554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:56.513 [2024-12-16 21:25:46.167567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.492 ms 00:19:56.513 [2024-12-16 21:25:46.167578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.167761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.513 [2024-12-16 21:25:46.167780] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:56.513 [2024-12-16 21:25:46.167794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:56.513 [2024-12-16 21:25:46.167805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.171080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.513 [2024-12-16 21:25:46.171137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:56.513 [2024-12-16 21:25:46.171148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:19:56.513 [2024-12-16 21:25:46.171158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.174253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.513 [2024-12-16 21:25:46.174313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:56.513 [2024-12-16 21:25:46.174323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.047 ms 00:19:56.513 [2024-12-16 21:25:46.174334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.176800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.513 [2024-12-16 21:25:46.176858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:56.513 [2024-12-16 21:25:46.176869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:19:56.513 [2024-12-16 21:25:46.176879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.179258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.513 [2024-12-16 21:25:46.179314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:56.513 [2024-12-16 21:25:46.179325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.306 ms 00:19:56.513 [2024-12-16 21:25:46.179335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.513 [2024-12-16 21:25:46.179379] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:56.513 [2024-12-16 21:25:46.179400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:56.513 [2024-12-16 21:25:46.179410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:56.513 [2024-12-16 21:25:46.179422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:56.513 [2024-12-16 21:25:46.179431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:56.513 [2024-12-16 21:25:46.179445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:56.513 [2024-12-16 21:25:46.179453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:56.513 [2024-12-16 21:25:46.179463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:56.513 [2024-12-16 21:25:46.179471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:56.513 [2024-12-16 21:25:46.179482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:56.513 [2024-12-16 21:25:46.179490] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free [Bands 11 through 83 elided: every band reports 0 / 261120 wr_cnt: 0 state: free; the dump is truncated at Band 84]
0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:56.514 [2024-12-16 21:25:46.180425] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:56.514 [2024-12-16 21:25:46.180435] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59037ec0-dba9-4cf7-9db3-267d4c6f8c8f 00:19:56.514 [2024-12-16 21:25:46.180445] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:56.514 [2024-12-16 21:25:46.180452] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:56.514 [2024-12-16 21:25:46.180464] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:56.514 [2024-12-16 21:25:46.180473] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:56.514 [2024-12-16 21:25:46.180483] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:56.514 [2024-12-16 21:25:46.180495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:56.514 [2024-12-16 21:25:46.180506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:56.514 [2024-12-16 21:25:46.180514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:56.514 [2024-12-16 21:25:46.180524] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:19:56.514 [2024-12-16 21:25:46.180535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.514 [2024-12-16 21:25:46.180551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:56.514 [2024-12-16 21:25:46.180562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:19:56.514 [2024-12-16 21:25:46.180573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.514 [2024-12-16 21:25:46.183757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.514 [2024-12-16 21:25:46.183805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:56.514 [2024-12-16 21:25:46.183816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.158 ms 00:19:56.514 [2024-12-16 21:25:46.183832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.514 [2024-12-16 21:25:46.183993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.514 [2024-12-16 21:25:46.184007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:56.514 [2024-12-16 21:25:46.184017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:56.514 [2024-12-16 21:25:46.184028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.514 [2024-12-16 21:25:46.194985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.514 [2024-12-16 21:25:46.195044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.514 [2024-12-16 21:25:46.195060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.514 [2024-12-16 21:25:46.195072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.514 [2024-12-16 21:25:46.195139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.514 [2024-12-16 21:25:46.195152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.514 [2024-12-16 21:25:46.195161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.514 [2024-12-16 21:25:46.195173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.514 [2024-12-16 21:25:46.195256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.514 [2024-12-16 21:25:46.195276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.514 [2024-12-16 21:25:46.195286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.514 [2024-12-16 21:25:46.195304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.514 [2024-12-16 21:25:46.195323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.514 [2024-12-16 21:25:46.195335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.514 [2024-12-16 21:25:46.195344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.514 [2024-12-16 21:25:46.195354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.776 [2024-12-16 21:25:46.215798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.776 [2024-12-16 21:25:46.215871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.776 [2024-12-16 21:25:46.215892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.776 
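
The stats dump above prints WAF as "inf" because this run recorded 960 total device writes against 0 user writes, so the amplification ratio is undefined. A minimal bash sketch of the same guard, with the two counts copied from the dump; treating WAF as the plain ratio of total writes to user writes is an assumption about how ftl_debug.c derives the figure, not lifted from the source:

    # Reproduce the "WAF: inf" line from the stats dump above.
    # Values come from "total writes: 960" and "user writes: 0".
    total_writes=960
    user_writes=0
    if [ "$user_writes" -eq 0 ]; then
        echo "WAF: inf"    # no user I/O yet -> ratio undefined
    else
        awk -v t="$total_writes" -v u="$user_writes" \
            'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi
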
[2024-12-16 21:25:46.215908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.776 [2024-12-16 21:25:46.232400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.776 [2024-12-16 21:25:46.232479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.776 [2024-12-16 21:25:46.232492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.776 [2024-12-16 21:25:46.232504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.776 [2024-12-16 21:25:46.232665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.776 [2024-12-16 21:25:46.232688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.776 [2024-12-16 21:25:46.232698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.776 [2024-12-16 21:25:46.232709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.776 [2024-12-16 21:25:46.232769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.776 [2024-12-16 21:25:46.232785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.776 [2024-12-16 21:25:46.232794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.776 [2024-12-16 21:25:46.232806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.776 [2024-12-16 21:25:46.232901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.776 [2024-12-16 21:25:46.232919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.776 [2024-12-16 21:25:46.232931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.776 [2024-12-16 21:25:46.232944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.776 [2024-12-16 21:25:46.233001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.776 [2024-12-16 21:25:46.233028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:56.776 [2024-12-16 21:25:46.233038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.776 [2024-12-16 21:25:46.233052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.776 [2024-12-16 21:25:46.233108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.776 [2024-12-16 21:25:46.233144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.776 [2024-12-16 21:25:46.233155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.776 [2024-12-16 21:25:46.233167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.776 [2024-12-16 21:25:46.233231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.776 [2024-12-16 21:25:46.233256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.776 [2024-12-16 21:25:46.233268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.776 [2024-12-16 21:25:46.233281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.776 [2024-12-16 21:25:46.233461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.634 ms, result 0 00:19:56.776 true 00:19:56.776 21:25:46 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 89949 00:19:56.776 
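
Each management step in the trail above is emitted by mngt/ftl_mngt.c as a fixed quadruple (Action/Rollback, name, duration, status), and the finish_msg line sums the shutdown just completed to 87.634 ms. A small awk sketch that ranks steps by duration, assuming the notices were captured one per line into a hypothetical file ftl.log:

    # Rank FTL management steps by duration, slowest first.
    # Assumes one *NOTICE* per line in a hypothetical capture ftl.log,
    # with the same "name: ..." / "duration: ... ms" layout as above.
    awk '
        /trace_step/ && /name:/     { sub(/.*name: /, "");     name = $0 }
        /trace_step/ && /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                      printf "%10.3f ms  %s\n", $0, name }
    ' ftl.log | sort -rn
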
21:25:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89949 ']' 00:19:56.776 21:25:46 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89949 00:19:56.776 21:25:46 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:19:56.776 21:25:46 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.776 21:25:46 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89949 00:19:56.776 21:25:46 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.776 killing process with pid 89949 00:19:56.776 21:25:46 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.776 21:25:46 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89949' 00:19:56.776 21:25:46 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 89949 00:19:56.776 21:25:46 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 89949 00:20:04.911 21:25:54 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:09.167 262144+0 records in 00:20:09.167 262144+0 records out 00:20:09.167 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.8047 s, 282 MB/s 00:20:09.167 21:25:58 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:11.080 21:26:00 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:11.080 [2024-12-16 21:26:00.483752] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:20:11.080 [2024-12-16 21:26:00.483871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90170 ] 00:20:11.080 [2024-12-16 21:26:00.630519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.080 [2024-12-16 21:26:00.654856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.080 [2024-12-16 21:26:00.768282] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.080 [2024-12-16 21:26:00.768368] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.343 [2024-12-16 21:26:00.929356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.929399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:11.343 [2024-12-16 21:26:00.929415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.343 [2024-12-16 21:26:00.929423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.929464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.929476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.343 [2024-12-16 21:26:00.929487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:11.343 [2024-12-16 21:26:00.929500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.929523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:11.343 [2024-12-16 21:26:00.930205] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:11.343 [2024-12-16 21:26:00.930244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.930261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.343 [2024-12-16 21:26:00.930273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:20:11.343 [2024-12-16 21:26:00.930281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.931354] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:11.343 [2024-12-16 21:26:00.934258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.934298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:11.343 [2024-12-16 21:26:00.934308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.906 ms 00:20:11.343 [2024-12-16 21:26:00.934321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.934372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.934382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:11.343 [2024-12-16 21:26:00.934395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:11.343 [2024-12-16 21:26:00.934401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.939464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.939500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.343 [2024-12-16 21:26:00.939513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.017 ms 00:20:11.343 [2024-12-16 21:26:00.939520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.939600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.939608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.343 [2024-12-16 21:26:00.939616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:11.343 [2024-12-16 21:26:00.939643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.939683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.939693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:11.343 [2024-12-16 21:26:00.939701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:11.343 [2024-12-16 21:26:00.939713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.939734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.343 [2024-12-16 21:26:00.941146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.941173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.343 [2024-12-16 21:26:00.941182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:20:11.343 [2024-12-16 21:26:00.941188] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.941218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.941226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:11.343 [2024-12-16 21:26:00.941234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:11.343 [2024-12-16 21:26:00.941243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.941261] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:11.343 [2024-12-16 21:26:00.941282] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:11.343 [2024-12-16 21:26:00.941320] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:11.343 [2024-12-16 21:26:00.941335] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:11.343 [2024-12-16 21:26:00.941448] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:11.343 [2024-12-16 21:26:00.941462] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:11.343 [2024-12-16 21:26:00.941475] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:11.343 [2024-12-16 21:26:00.941485] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:11.343 [2024-12-16 21:26:00.941496] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:11.343 [2024-12-16 21:26:00.941504] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:11.343 [2024-12-16 21:26:00.941515] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:11.343 [2024-12-16 21:26:00.941522] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:11.343 [2024-12-16 21:26:00.941529] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:11.343 [2024-12-16 21:26:00.941536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.941543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:11.343 [2024-12-16 21:26:00.941553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:20:11.343 [2024-12-16 21:26:00.941560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.941658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.343 [2024-12-16 21:26:00.941670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:11.343 [2024-12-16 21:26:00.941677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:11.343 [2024-12-16 21:26:00.941684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.343 [2024-12-16 21:26:00.941790] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:11.343 [2024-12-16 21:26:00.941802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:11.343 [2024-12-16 21:26:00.941816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.343 
[2024-12-16 21:26:00.941825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.343 [2024-12-16 21:26:00.941838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:11.343 [2024-12-16 21:26:00.941851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:11.343 [2024-12-16 21:26:00.941859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:11.344 [2024-12-16 21:26:00.941867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:11.344 [2024-12-16 21:26:00.941878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:11.344 [2024-12-16 21:26:00.941886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.344 [2024-12-16 21:26:00.941897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:11.344 [2024-12-16 21:26:00.941905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:11.344 [2024-12-16 21:26:00.941918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.344 [2024-12-16 21:26:00.941926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:11.344 [2024-12-16 21:26:00.941938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:11.344 [2024-12-16 21:26:00.941945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.344 [2024-12-16 21:26:00.941956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:11.344 [2024-12-16 21:26:00.941964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:11.344 [2024-12-16 21:26:00.941976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.344 [2024-12-16 21:26:00.941984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:11.344 [2024-12-16 21:26:00.941995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:11.344 [2024-12-16 21:26:00.942002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.344 [2024-12-16 21:26:00.942010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:11.344 [2024-12-16 21:26:00.942018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:11.344 [2024-12-16 21:26:00.942025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.344 [2024-12-16 21:26:00.942033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:11.344 [2024-12-16 21:26:00.942040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:11.344 [2024-12-16 21:26:00.942048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.344 [2024-12-16 21:26:00.942058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:11.344 [2024-12-16 21:26:00.942066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:11.344 [2024-12-16 21:26:00.942073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.344 [2024-12-16 21:26:00.942081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:11.344 [2024-12-16 21:26:00.942088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:11.344 [2024-12-16 21:26:00.942095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.344 [2024-12-16 21:26:00.942102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:11.344 [2024-12-16 21:26:00.942110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:11.344 [2024-12-16 21:26:00.942117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.344 [2024-12-16 21:26:00.942124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:11.344 [2024-12-16 21:26:00.942132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:11.344 [2024-12-16 21:26:00.942139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.344 [2024-12-16 21:26:00.942146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:11.344 [2024-12-16 21:26:00.942154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:11.344 [2024-12-16 21:26:00.942161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.344 [2024-12-16 21:26:00.942168] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:11.344 [2024-12-16 21:26:00.942179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:11.344 [2024-12-16 21:26:00.942187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.344 [2024-12-16 21:26:00.942197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.344 [2024-12-16 21:26:00.942206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:11.344 [2024-12-16 21:26:00.942213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:11.344 [2024-12-16 21:26:00.942221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:11.344 [2024-12-16 21:26:00.942229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:11.344 [2024-12-16 21:26:00.942236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:11.344 [2024-12-16 21:26:00.942244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:11.344 [2024-12-16 21:26:00.942253] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:11.344 [2024-12-16 21:26:00.942262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.344 [2024-12-16 21:26:00.942272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:11.344 [2024-12-16 21:26:00.942280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:11.344 [2024-12-16 21:26:00.942288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:11.344 [2024-12-16 21:26:00.942296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:11.344 [2024-12-16 21:26:00.942305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:11.344 [2024-12-16 21:26:00.942314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:11.344 [2024-12-16 21:26:00.942323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:11.344 [2024-12-16 21:26:00.942331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:11.344 [2024-12-16 21:26:00.942338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:11.344 [2024-12-16 21:26:00.942349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:11.344 [2024-12-16 21:26:00.942356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:11.344 [2024-12-16 21:26:00.942363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:11.344 [2024-12-16 21:26:00.942370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:11.344 [2024-12-16 21:26:00.942377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:11.344 [2024-12-16 21:26:00.942383] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:11.344 [2024-12-16 21:26:00.942391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.344 [2024-12-16 21:26:00.942399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:11.344 [2024-12-16 21:26:00.942406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:11.344 [2024-12-16 21:26:00.942413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:11.344 [2024-12-16 21:26:00.942420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:11.344 [2024-12-16 21:26:00.942426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.344 [2024-12-16 21:26:00.942435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:11.344 [2024-12-16 21:26:00.942442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:20:11.344 [2024-12-16 21:26:00.942452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.344 [2024-12-16 21:26:00.951343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.344 [2024-12-16 21:26:00.951375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.344 [2024-12-16 21:26:00.951392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.852 ms 00:20:11.344 [2024-12-16 21:26:00.951400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.344 [2024-12-16 21:26:00.951479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.344 [2024-12-16 21:26:00.951488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:11.344 [2024-12-16 21:26:00.951498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 
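
The layout numbers above are internally consistent: 20971520 L2P entries at the reported 4-byte address size come to 83,886,080 bytes, exactly the 80.00 MiB shown for the l2p region, and assuming the FTL's usual 4 KiB logical block those entries map 80 GiB of addressable space. A quick shell cross-check:

    # Cross-check the l2p region size against the superblock parameters above.
    l2p_entries=20971520        # "L2P entries" from the layout dump
    addr_size=4                 # "L2P address size" in bytes
    echo "$(( l2p_entries * addr_size / 1024 / 1024 )) MiB"    # -> 80 MiB
    echo "$(( l2p_entries * 4 / 1024 / 1024 )) GiB mapped"     # 4 KiB blocks (assumed)
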
00:20:11.344 [2024-12-16 21:26:00.951505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.344 [2024-12-16 21:26:00.967598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.344 [2024-12-16 21:26:00.967653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.344 [2024-12-16 21:26:00.967666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.045 ms 00:20:11.344 [2024-12-16 21:26:00.967675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.344 [2024-12-16 21:26:00.967717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.344 [2024-12-16 21:26:00.967727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.344 [2024-12-16 21:26:00.967737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:11.344 [2024-12-16 21:26:00.967745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.344 [2024-12-16 21:26:00.968124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.344 [2024-12-16 21:26:00.968160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.344 [2024-12-16 21:26:00.968172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:20:11.344 [2024-12-16 21:26:00.968179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.344 [2024-12-16 21:26:00.968320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.344 [2024-12-16 21:26:00.968331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.344 [2024-12-16 21:26:00.968341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:20:11.344 [2024-12-16 21:26:00.968351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.344 [2024-12-16 21:26:00.973734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.344 [2024-12-16 21:26:00.973763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.344 [2024-12-16 21:26:00.973772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.362 ms 00:20:11.344 [2024-12-16 21:26:00.973779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.344 [2024-12-16 21:26:00.976522] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:11.344 [2024-12-16 21:26:00.976556] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:11.344 [2024-12-16 21:26:00.976569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:00.976577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:11.345 [2024-12-16 21:26:00.976585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.708 ms 00:20:11.345 [2024-12-16 21:26:00.976591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:00.991222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:00.991261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:11.345 [2024-12-16 21:26:00.991271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.567 ms 00:20:11.345 [2024-12-16 21:26:00.991279] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:00.993208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:00.993238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:11.345 [2024-12-16 21:26:00.993247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:20:11.345 [2024-12-16 21:26:00.993254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:00.995119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:00.995149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:11.345 [2024-12-16 21:26:00.995158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 00:20:11.345 [2024-12-16 21:26:00.995164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:00.995467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:00.995492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:11.345 [2024-12-16 21:26:00.995501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:20:11.345 [2024-12-16 21:26:00.995508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:01.013239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:01.013282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:11.345 [2024-12-16 21:26:01.013292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.716 ms 00:20:11.345 [2024-12-16 21:26:01.013303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:01.020745] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:11.345 [2024-12-16 21:26:01.023229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:01.023263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:11.345 [2024-12-16 21:26:01.023277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.890 ms 00:20:11.345 [2024-12-16 21:26:01.023284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:01.023335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:01.023345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:11.345 [2024-12-16 21:26:01.023354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:11.345 [2024-12-16 21:26:01.023366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:01.023446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:01.023457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:11.345 [2024-12-16 21:26:01.023468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:11.345 [2024-12-16 21:26:01.023477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:01.023494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:01.023505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:20:11.345 [2024-12-16 21:26:01.023513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.345 [2024-12-16 21:26:01.023519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:01.023550] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:11.345 [2024-12-16 21:26:01.023560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:01.023570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:11.345 [2024-12-16 21:26:01.023578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:11.345 [2024-12-16 21:26:01.023588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:01.027294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:01.027327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:11.345 [2024-12-16 21:26:01.027337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:20:11.345 [2024-12-16 21:26:01.027344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:01.027405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.345 [2024-12-16 21:26:01.027421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:11.345 [2024-12-16 21:26:01.027429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:11.345 [2024-12-16 21:26:01.027436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.345 [2024-12-16 21:26:01.028336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.590 ms, result 0 00:20:12.731  [2024-12-16T21:26:03.372Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-16T21:26:04.314Z] Copying: 34/1024 [MB] (18 MBps) [2024-12-16T21:26:05.258Z] Copying: 53/1024 [MB] (18 MBps) [2024-12-16T21:26:06.199Z] Copying: 65/1024 [MB] (12 MBps) [2024-12-16T21:26:07.139Z] Copying: 76/1024 [MB] (11 MBps) [2024-12-16T21:26:08.078Z] Copying: 86/1024 [MB] (10 MBps) [2024-12-16T21:26:09.465Z] Copying: 100/1024 [MB] (14 MBps) [2024-12-16T21:26:10.409Z] Copying: 110/1024 [MB] (10 MBps) [2024-12-16T21:26:11.354Z] Copying: 125/1024 [MB] (14 MBps) [2024-12-16T21:26:12.295Z] Copying: 139/1024 [MB] (13 MBps) [2024-12-16T21:26:13.233Z] Copying: 155/1024 [MB] (16 MBps) [2024-12-16T21:26:14.174Z] Copying: 171/1024 [MB] (15 MBps) [2024-12-16T21:26:15.190Z] Copying: 185864/1048576 [kB] (10216 kBps) [2024-12-16T21:26:16.131Z] Copying: 195/1024 [MB] (13 MBps) [2024-12-16T21:26:17.076Z] Copying: 218/1024 [MB] (23 MBps) [2024-12-16T21:26:18.461Z] Copying: 229/1024 [MB] (10 MBps) [2024-12-16T21:26:19.405Z] Copying: 244/1024 [MB] (14 MBps) [2024-12-16T21:26:20.346Z] Copying: 260/1024 [MB] (16 MBps) [2024-12-16T21:26:21.291Z] Copying: 275/1024 [MB] (14 MBps) [2024-12-16T21:26:22.236Z] Copying: 291/1024 [MB] (16 MBps) [2024-12-16T21:26:23.180Z] Copying: 305/1024 [MB] (13 MBps) [2024-12-16T21:26:24.118Z] Copying: 317/1024 [MB] (11 MBps) [2024-12-16T21:26:25.051Z] Copying: 334/1024 [MB] (17 MBps) [2024-12-16T21:26:26.427Z] Copying: 353/1024 [MB] (18 MBps) [2024-12-16T21:26:27.360Z] Copying: 368/1024 [MB] (15 MBps) [2024-12-16T21:26:28.294Z] Copying: 384/1024 [MB] (15 MBps) [2024-12-16T21:26:29.229Z] Copying: 405/1024 [MB] (21 MBps) 
[2024-12-16T21:26:30.163Z] Copying: 422/1024 [MB] (16 MBps) [2024-12-16T21:26:31.095Z] Copying: 438/1024 [MB] (16 MBps) [2024-12-16T21:26:32.473Z] Copying: 455/1024 [MB] (16 MBps) [2024-12-16T21:26:33.047Z] Copying: 483/1024 [MB] (27 MBps) [2024-12-16T21:26:34.430Z] Copying: 498/1024 [MB] (15 MBps) [2024-12-16T21:26:35.367Z] Copying: 522/1024 [MB] (24 MBps) [2024-12-16T21:26:36.312Z] Copying: 534/1024 [MB] (12 MBps) [2024-12-16T21:26:37.257Z] Copying: 549/1024 [MB] (14 MBps) [2024-12-16T21:26:38.202Z] Copying: 559/1024 [MB] (10 MBps) [2024-12-16T21:26:39.145Z] Copying: 569/1024 [MB] (10 MBps) [2024-12-16T21:26:40.087Z] Copying: 580/1024 [MB] (11 MBps) [2024-12-16T21:26:41.467Z] Copying: 591/1024 [MB] (10 MBps) [2024-12-16T21:26:42.407Z] Copying: 616/1024 [MB] (24 MBps) [2024-12-16T21:26:43.347Z] Copying: 629/1024 [MB] (13 MBps) [2024-12-16T21:26:44.282Z] Copying: 639/1024 [MB] (10 MBps) [2024-12-16T21:26:45.217Z] Copying: 656/1024 [MB] (16 MBps) [2024-12-16T21:26:46.151Z] Copying: 671/1024 [MB] (15 MBps) [2024-12-16T21:26:47.124Z] Copying: 687/1024 [MB] (15 MBps) [2024-12-16T21:26:48.057Z] Copying: 703/1024 [MB] (16 MBps) [2024-12-16T21:26:49.431Z] Copying: 719/1024 [MB] (15 MBps) [2024-12-16T21:26:50.364Z] Copying: 738/1024 [MB] (19 MBps) [2024-12-16T21:26:51.298Z] Copying: 754/1024 [MB] (16 MBps) [2024-12-16T21:26:52.232Z] Copying: 770/1024 [MB] (16 MBps) [2024-12-16T21:26:53.166Z] Copying: 786/1024 [MB] (15 MBps) [2024-12-16T21:26:54.102Z] Copying: 806/1024 [MB] (20 MBps) [2024-12-16T21:26:55.044Z] Copying: 822/1024 [MB] (15 MBps) [2024-12-16T21:26:56.416Z] Copying: 833/1024 [MB] (10 MBps) [2024-12-16T21:26:57.350Z] Copying: 849/1024 [MB] (15 MBps) [2024-12-16T21:26:58.284Z] Copying: 877/1024 [MB] (28 MBps) [2024-12-16T21:26:59.216Z] Copying: 905/1024 [MB] (27 MBps) [2024-12-16T21:27:00.149Z] Copying: 934/1024 [MB] (29 MBps) [2024-12-16T21:27:01.082Z] Copying: 952/1024 [MB] (17 MBps) [2024-12-16T21:27:02.457Z] Copying: 966/1024 [MB] (14 MBps) [2024-12-16T21:27:03.391Z] Copying: 992/1024 [MB] (25 MBps) [2024-12-16T21:27:04.329Z] Copying: 1006/1024 [MB] (14 MBps) [2024-12-16T21:27:04.329Z] Copying: 1022/1024 [MB] (15 MBps) [2024-12-16T21:27:04.329Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 21:27:04.157080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.157124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:14.629 [2024-12-16 21:27:04.157137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:14.629 [2024-12-16 21:27:04.157149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.157169] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:14.629 [2024-12-16 21:27:04.157652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.157678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:14.629 [2024-12-16 21:27:04.157689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:21:14.629 [2024-12-16 21:27:04.157696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.160248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.160284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:14.629 [2024-12-16 21:27:04.160294] mngt/ftl_mngt.c: 
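
The copy that just finished squares with the wall clock: the 1 GiB testfile prepared earlier (bs=4K x count=256K = 1,073,741,824 bytes, generated from /dev/urandom at ~282 MB/s) went through ftl0 between roughly 21:26:03 and 21:27:04, about 63 s, which matches the reported average of 16 MBps:

    # Sanity-check the "average 16 MBps" figure from the progress trail above.
    bytes=$(( 256 * 1024 * 4096 ))    # bs=4K count=256K -> 1073741824 bytes
    secs=63                           # ~21:26:03 .. 21:27:04 wall clock
    echo "$(( bytes / 1024 / 1024 / secs )) MB/s"    # -> 16 MB/s
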
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.533 ms 00:21:14.629 [2024-12-16 21:27:04.160302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.180485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.180528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:14.629 [2024-12-16 21:27:04.180538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.166 ms 00:21:14.629 [2024-12-16 21:27:04.180552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.186636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.186662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:14.629 [2024-12-16 21:27:04.186672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.054 ms 00:21:14.629 [2024-12-16 21:27:04.186679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.188988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.189022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:14.629 [2024-12-16 21:27:04.189031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.258 ms 00:21:14.629 [2024-12-16 21:27:04.189038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.193042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.193084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:14.629 [2024-12-16 21:27:04.193093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.960 ms 00:21:14.629 [2024-12-16 21:27:04.193101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.193217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.193226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:14.629 [2024-12-16 21:27:04.193234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:14.629 [2024-12-16 21:27:04.193241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.196431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.196477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:14.629 [2024-12-16 21:27:04.196487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.171 ms 00:21:14.629 [2024-12-16 21:27:04.196494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.198901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.198935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:14.629 [2024-12-16 21:27:04.198944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.375 ms 00:21:14.629 [2024-12-16 21:27:04.198950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.200589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.200621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 
00:21:14.629 [2024-12-16 21:27:04.200641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:21:14.629 [2024-12-16 21:27:04.200648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.202441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.629 [2024-12-16 21:27:04.202473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:14.629 [2024-12-16 21:27:04.202483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:21:14.629 [2024-12-16 21:27:04.202490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.629 [2024-12-16 21:27:04.202518] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:14.629 [2024-12-16 21:27:04.202533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202695] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:14.629 [2024-12-16 21:27:04.202722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 
21:27:04.202878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.202999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:21:14.630 [2024-12-16 21:27:04.203056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:14.630 [2024-12-16 21:27:04.203280] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:14.630 [2024-12-16 21:27:04.203288] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59037ec0-dba9-4cf7-9db3-267d4c6f8c8f 00:21:14.630 [2024-12-16 21:27:04.203295] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:14.630 [2024-12-16 21:27:04.203302] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:14.630 [2024-12-16 21:27:04.203309] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:14.630 [2024-12-16 21:27:04.203316] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:14.631 [2024-12-16 21:27:04.203323] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:14.631 [2024-12-16 21:27:04.203330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:14.631 [2024-12-16 21:27:04.203337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:14.631 [2024-12-16 21:27:04.203343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:14.631 [2024-12-16 21:27:04.203349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:14.631 [2024-12-16 21:27:04.203355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.631 [2024-12-16 21:27:04.203362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:14.631 [2024-12-16 21:27:04.203375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:21:14.631 [2024-12-16 21:27:04.203382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.204840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.631 [2024-12-16 21:27:04.204867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:14.631 [2024-12-16 21:27:04.204876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:21:14.631 [2024-12-16 21:27:04.204883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.204963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.631 [2024-12-16 21:27:04.204971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:14.631 [2024-12-16 21:27:04.204979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:14.631 [2024-12-16 21:27:04.204990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.209920] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.209954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:14.631 [2024-12-16 21:27:04.209963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.209970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.210019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.210026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:14.631 [2024-12-16 21:27:04.210034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.210040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.210087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.210096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:14.631 [2024-12-16 21:27:04.210103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.210110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.210124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.210135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:14.631 [2024-12-16 21:27:04.210146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.210152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.218927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.218966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:14.631 [2024-12-16 21:27:04.218976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.218983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.226222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.226265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:14.631 [2024-12-16 21:27:04.226275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.226282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.226304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.226313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:14.631 [2024-12-16 21:27:04.226321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.226333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.226379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.226388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:14.631 [2024-12-16 21:27:04.226396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.226405] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.226464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.226473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:14.631 [2024-12-16 21:27:04.226481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.226488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.226515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.226523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:14.631 [2024-12-16 21:27:04.226531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.226540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.226579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.226587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:14.631 [2024-12-16 21:27:04.226595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.226602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.226655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.631 [2024-12-16 21:27:04.226665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:14.631 [2024-12-16 21:27:04.226673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.631 [2024-12-16 21:27:04.226683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.631 [2024-12-16 21:27:04.226802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.704 ms, result 0 00:21:14.892 00:21:14.892 00:21:14.892 21:27:04 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:14.892 [2024-12-16 21:27:04.494815] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
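Editor's note on the numbers above: the shutdown statistics report "WAF: inf" because the write amplification factor is total media writes divided by user writes, and this phase of the run performed "user writes: 0" (the 960 total writes are the FTL's own metadata traffic). The spdk_dd restore command that follows reads the device back with --count=262144; a minimal size check, assuming the conventional 4 KiB FTL logical block (the block size is an assumption here, the other values are taken from the log):

    # Size of the spdk_dd restore transfer.
    count = 262144          # --count from the spdk_dd command line above
    block_size = 4096       # assumed FTL logical block size in bytes
    total_mib = count * block_size // (1024 * 1024)
    print(total_mib)        # 1024 -> matches the "Copying: x/1024 [MB]" progress below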
00:21:14.892 [2024-12-16 21:27:04.494939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90826 ] 00:21:15.153 [2024-12-16 21:27:04.644617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.153 [2024-12-16 21:27:04.672299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.153 [2024-12-16 21:27:04.783414] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:15.153 [2024-12-16 21:27:04.783514] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:15.416 [2024-12-16 21:27:04.944843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.416 [2024-12-16 21:27:04.944906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:15.416 [2024-12-16 21:27:04.944922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:15.416 [2024-12-16 21:27:04.944931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.416 [2024-12-16 21:27:04.944986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.416 [2024-12-16 21:27:04.944997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:15.416 [2024-12-16 21:27:04.945006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:15.416 [2024-12-16 21:27:04.945020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.416 [2024-12-16 21:27:04.945067] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:15.416 [2024-12-16 21:27:04.945371] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:15.416 [2024-12-16 21:27:04.945398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.416 [2024-12-16 21:27:04.945409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:15.416 [2024-12-16 21:27:04.945420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:21:15.416 [2024-12-16 21:27:04.945432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.416 [2024-12-16 21:27:04.947282] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:15.416 [2024-12-16 21:27:04.951042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.416 [2024-12-16 21:27:04.951098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:15.416 [2024-12-16 21:27:04.951111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.762 ms 00:21:15.416 [2024-12-16 21:27:04.951128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.417 [2024-12-16 21:27:04.951202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.417 [2024-12-16 21:27:04.951216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:15.417 [2024-12-16 21:27:04.951229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:15.417 [2024-12-16 21:27:04.951237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.417 [2024-12-16 21:27:04.959335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:15.417 [2024-12-16 21:27:04.959378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:15.417 [2024-12-16 21:27:04.959395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.053 ms 00:21:15.417 [2024-12-16 21:27:04.959407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.417 [2024-12-16 21:27:04.959507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.417 [2024-12-16 21:27:04.959521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:15.417 [2024-12-16 21:27:04.959530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:15.417 [2024-12-16 21:27:04.959538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.417 [2024-12-16 21:27:04.959601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.417 [2024-12-16 21:27:04.959619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:15.417 [2024-12-16 21:27:04.959677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:15.417 [2024-12-16 21:27:04.959687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.417 [2024-12-16 21:27:04.959710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:15.417 [2024-12-16 21:27:04.961819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.417 [2024-12-16 21:27:04.961857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:15.417 [2024-12-16 21:27:04.961868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.114 ms 00:21:15.417 [2024-12-16 21:27:04.961875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.417 [2024-12-16 21:27:04.961919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.417 [2024-12-16 21:27:04.961927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:15.417 [2024-12-16 21:27:04.961936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:15.417 [2024-12-16 21:27:04.961946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.417 [2024-12-16 21:27:04.961968] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:15.417 [2024-12-16 21:27:04.961990] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:15.417 [2024-12-16 21:27:04.962036] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:15.417 [2024-12-16 21:27:04.962054] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:15.417 [2024-12-16 21:27:04.962159] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:15.417 [2024-12-16 21:27:04.962170] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:15.417 [2024-12-16 21:27:04.962184] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:15.417 [2024-12-16 21:27:04.962194] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962204] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962212] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:15.417 [2024-12-16 21:27:04.962222] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:15.417 [2024-12-16 21:27:04.962230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:15.417 [2024-12-16 21:27:04.962238] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:15.417 [2024-12-16 21:27:04.962246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.417 [2024-12-16 21:27:04.962254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:15.417 [2024-12-16 21:27:04.962263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:21:15.417 [2024-12-16 21:27:04.962273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.417 [2024-12-16 21:27:04.962365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.417 [2024-12-16 21:27:04.962383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:15.417 [2024-12-16 21:27:04.962392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:15.417 [2024-12-16 21:27:04.962403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.417 [2024-12-16 21:27:04.962507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:15.417 [2024-12-16 21:27:04.962519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:15.417 [2024-12-16 21:27:04.962529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:15.417 [2024-12-16 21:27:04.962555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:15.417 [2024-12-16 21:27:04.962579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:15.417 [2024-12-16 21:27:04.962594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:15.417 [2024-12-16 21:27:04.962605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:15.417 [2024-12-16 21:27:04.962613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:15.417 [2024-12-16 21:27:04.962621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:15.417 [2024-12-16 21:27:04.962649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:15.417 [2024-12-16 21:27:04.962658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:15.417 [2024-12-16 21:27:04.962674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962683] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:15.417 [2024-12-16 21:27:04.962700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:15.417 [2024-12-16 21:27:04.962723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:15.417 [2024-12-16 21:27:04.962749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:15.417 [2024-12-16 21:27:04.962777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:15.417 [2024-12-16 21:27:04.962800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:15.417 [2024-12-16 21:27:04.962816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:15.417 [2024-12-16 21:27:04.962824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:15.417 [2024-12-16 21:27:04.962832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:15.417 [2024-12-16 21:27:04.962840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:15.417 [2024-12-16 21:27:04.962846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:15.417 [2024-12-16 21:27:04.962852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:15.417 [2024-12-16 21:27:04.962866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:15.417 [2024-12-16 21:27:04.962873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962882] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:15.417 [2024-12-16 21:27:04.962893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:15.417 [2024-12-16 21:27:04.962901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.417 [2024-12-16 21:27:04.962918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:15.417 [2024-12-16 21:27:04.962925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:15.417 [2024-12-16 21:27:04.962933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:15.417 
[2024-12-16 21:27:04.962941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:15.417 [2024-12-16 21:27:04.962947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:15.417 [2024-12-16 21:27:04.962954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:15.417 [2024-12-16 21:27:04.962963] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:15.417 [2024-12-16 21:27:04.962973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:15.417 [2024-12-16 21:27:04.962981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:15.417 [2024-12-16 21:27:04.962989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:15.417 [2024-12-16 21:27:04.962996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:15.417 [2024-12-16 21:27:04.963003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:15.417 [2024-12-16 21:27:04.963013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:15.418 [2024-12-16 21:27:04.963020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:15.418 [2024-12-16 21:27:04.963027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:15.418 [2024-12-16 21:27:04.963034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:15.418 [2024-12-16 21:27:04.963041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:15.418 [2024-12-16 21:27:04.963054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:15.418 [2024-12-16 21:27:04.963061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:15.418 [2024-12-16 21:27:04.963070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:15.418 [2024-12-16 21:27:04.963077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:15.418 [2024-12-16 21:27:04.963085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:15.418 [2024-12-16 21:27:04.963092] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:15.418 [2024-12-16 21:27:04.963101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:15.418 [2024-12-16 21:27:04.963109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:15.418 [2024-12-16 21:27:04.963116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:15.418 [2024-12-16 21:27:04.963123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:15.418 [2024-12-16 21:27:04.963130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:15.418 [2024-12-16 21:27:04.963139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:04.963147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:15.418 [2024-12-16 21:27:04.963154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:21:15.418 [2024-12-16 21:27:04.963165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:04.977704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:04.977752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:15.418 [2024-12-16 21:27:04.977766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.493 ms 00:21:15.418 [2024-12-16 21:27:04.977775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:04.977865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:04.977875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:15.418 [2024-12-16 21:27:04.977886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:15.418 [2024-12-16 21:27:04.977895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.000410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.000481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:15.418 [2024-12-16 21:27:05.000500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.456 ms 00:21:15.418 [2024-12-16 21:27:05.000523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.000586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.000602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:15.418 [2024-12-16 21:27:05.000617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:15.418 [2024-12-16 21:27:05.000650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.001311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.001365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:15.418 [2024-12-16 21:27:05.001383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:21:15.418 [2024-12-16 21:27:05.001397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.001612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.001660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:15.418 [2024-12-16 21:27:05.001681] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:21:15.418 [2024-12-16 21:27:05.001694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.010470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.010508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:15.418 [2024-12-16 21:27:05.010518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.747 ms 00:21:15.418 [2024-12-16 21:27:05.010526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.014494] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:15.418 [2024-12-16 21:27:05.014549] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:15.418 [2024-12-16 21:27:05.014565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.014574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:15.418 [2024-12-16 21:27:05.014583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.890 ms 00:21:15.418 [2024-12-16 21:27:05.014590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.031132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.031177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:15.418 [2024-12-16 21:27:05.031190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.466 ms 00:21:15.418 [2024-12-16 21:27:05.031198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.034363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.034416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:15.418 [2024-12-16 21:27:05.034427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.114 ms 00:21:15.418 [2024-12-16 21:27:05.034433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.036872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.036918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:15.418 [2024-12-16 21:27:05.036929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.387 ms 00:21:15.418 [2024-12-16 21:27:05.036937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.037321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.037345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:15.418 [2024-12-16 21:27:05.037356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:21:15.418 [2024-12-16 21:27:05.037371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.063402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.063459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:15.418 [2024-12-16 21:27:05.063472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.009 ms 00:21:15.418 [2024-12-16 21:27:05.063481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.071645] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:15.418 [2024-12-16 21:27:05.074566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.074607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:15.418 [2024-12-16 21:27:05.074622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.035 ms 00:21:15.418 [2024-12-16 21:27:05.074658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.074736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.074748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:15.418 [2024-12-16 21:27:05.074765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:15.418 [2024-12-16 21:27:05.074774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.074843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.074856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:15.418 [2024-12-16 21:27:05.074865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:15.418 [2024-12-16 21:27:05.074872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.074893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.074902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:15.418 [2024-12-16 21:27:05.074911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:15.418 [2024-12-16 21:27:05.074920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.074956] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:15.418 [2024-12-16 21:27:05.074966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.074974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:15.418 [2024-12-16 21:27:05.074984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:15.418 [2024-12-16 21:27:05.074992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.080462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.080510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:15.418 [2024-12-16 21:27:05.080522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.451 ms 00:21:15.418 [2024-12-16 21:27:05.080530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 [2024-12-16 21:27:05.080607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.418 [2024-12-16 21:27:05.080643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:15.418 [2024-12-16 21:27:05.080656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:15.418 [2024-12-16 21:27:05.080667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.418 
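Editor's note: before the startup-finished message below, the layout dump from earlier in this startup can be sanity-checked arithmetically. A minimal sketch, assuming a 4 KiB FTL block/page size (an assumption; the entry and page counts are copied from the ftl_layout and superblock output above):

    # Cross-check of the ftl_layout numbers printed during startup.
    MiB = 1024 * 1024
    l2p_entries = 20971520      # "L2P entries" from the log
    l2p_addr_size = 4           # "L2P address size" (bytes per entry)
    p2l_pages = 2048            # "P2L checkpoint pages"
    block = 4096                # assumed FTL block size in bytes
    print(l2p_entries * l2p_addr_size / MiB)  # 80.0  -> "Region l2p ... blocks: 80.00 MiB"
    print(p2l_pages * block / MiB)            # 8.0   -> "Region p2l0 ... blocks: 8.00 MiB"
    print(l2p_entries * block / MiB)          # 81920.0 MiB (80 GiB) of L2P-addressable data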
[2024-12-16 21:27:05.082000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.703 ms, result 0 00:21:16.803 [2024-12-16T21:27:07.447Z] Copying: 19/1024 [MB] (19 MBps) [... intermediate Copying progress updates (21:27:08Z through 21:28:18Z, 10-21 MBps per interval) elided ...]
[2024-12-16T21:28:18.990Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-16 21:28:18.724878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.724923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:29.290 [2024-12-16 21:28:18.724937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:29.290 [2024-12-16 21:28:18.724951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.724974] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:29.290 [2024-12-16 21:28:18.725407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.725438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:29.290 [2024-12-16 21:28:18.725449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:22:29.290 [2024-12-16 21:28:18.725458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.725662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.725679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:29.290 [2024-12-16 21:28:18.725690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:22:29.290 [2024-12-16 21:28:18.725703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.728675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.728697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:29.290 [2024-12-16 21:28:18.728707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:22:29.290 [2024-12-16 21:28:18.728717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.733315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.733346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:29.290 [2024-12-16 21:28:18.733356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.580 ms 00:22:29.290 [2024-12-16 21:28:18.733365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.734707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.734732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:29.290 [2024-12-16 21:28:18.734742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:22:29.290 [2024-12-16 21:28:18.734750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.738837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.738862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:29.290 [2024-12-16 21:28:18.738872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.057 ms 00:22:29.290 [2024-12-16 21:28:18.738880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.738989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.739006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:29.290 [2024-12-16 21:28:18.739015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:29.290 [2024-12-16 21:28:18.739029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.741092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.741130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:29.290 [2024-12-16 21:28:18.741140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.048 ms 00:22:29.290 [2024-12-16 21:28:18.741147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.743090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.743115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:29.290 [2024-12-16 21:28:18.743124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:22:29.290 [2024-12-16 21:28:18.743132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.744719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.744742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:29.290 [2024-12-16 21:28:18.744751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:22:29.290 [2024-12-16 21:28:18.744758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.746488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.290 [2024-12-16 21:28:18.746513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:29.290 [2024-12-16 21:28:18.746522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:22:29.290 [2024-12-16 21:28:18.746530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.290 [2024-12-16 21:28:18.746558] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:29.290 [2024-12-16 21:28:18.746572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:29.290 [2024-12-16 21:28:18.746590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:29.290 [2024-12-16 21:28:18.746600] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:22:29.290 [2024-12-16 21:28:18.746609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
[bands 5 through 99 report identically: 0 / 261120 wr_cnt: 0 state: free]
00:22:29.291 [2024-12-16 21:28:18.747401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:22:29.291 [2024-12-16 21:28:18.747419] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:29.291 [2024-12-16 21:28:18.747435] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59037ec0-dba9-4cf7-9db3-267d4c6f8c8f
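Each band in the dump above spans 261120 blocks. Assuming the FTL's usual 4 KiB block size (an assumption here, not something this log states), that is 1020 MiB per band and roughly 100 GiB across the 100 bands, which lines up with the 102400.00 MiB data region reported later in the startup layout dump. The stats that follow also report WAF: inf, which is simply total writes divided by user writes when no user data was written. A minimal sketch of both checks:

    # Back-of-the-envelope checks against the band dump above and the
    # stats that follow. BLOCK_SIZE is an assumed 4 KiB FTL block size.
    BLOCK_SIZE = 4096          # assumed FTL block size in bytes
    BAND_BLOCKS = 261120       # "Band N: 0 / 261120" from the dump
    NUM_BANDS = 100            # bands 1..100 are listed

    band_mib = BAND_BLOCKS * BLOCK_SIZE / 2**20
    print(f"per-band capacity: {band_mib:.0f} MiB")            # 1020 MiB
    print(f"total band space: {band_mib * NUM_BANDS / 2**10:.1f} GiB")

    def waf(total_writes: int, user_writes: int) -> float:
        """Write amplification factor; inf when nothing was user-written."""
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))  # matches the "WAF: inf" line in the stats dump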
00:22:29.292 [2024-12-16 21:28:18.747445] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:29.292 [2024-12-16 21:28:18.747454] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:29.292 [2024-12-16 21:28:18.747462] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:29.292 [2024-12-16 21:28:18.747472] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:29.292 [2024-12-16 21:28:18.747477] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:29.292 [2024-12-16 21:28:18.747489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:29.292 [2024-12-16 21:28:18.747495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:29.292 [2024-12-16 21:28:18.747499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:29.292 [2024-12-16 21:28:18.747504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:29.292 [2024-12-16 21:28:18.747509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.292 [2024-12-16 21:28:18.747521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:29.292 [2024-12-16 21:28:18.747528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:22:29.292 [2024-12-16 21:28:18.747533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.748746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.292 [2024-12-16 21:28:18.748768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:29.292 [2024-12-16 21:28:18.748778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:22:29.292 [2024-12-16 21:28:18.748786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.748867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.292 [2024-12-16 21:28:18.748879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:29.292 [2024-12-16 21:28:18.748889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:29.292 [2024-12-16 21:28:18.748898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.752826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.752852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.292 [2024-12-16 21:28:18.752868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.752880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.752935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.752946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.292 [2024-12-16 21:28:18.752956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.752965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.753004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.753020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.292 [2024-12-16 21:28:18.753033] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.753042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.753062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.753075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:29.292 [2024-12-16 21:28:18.753085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.753093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.760301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.760330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.292 [2024-12-16 21:28:18.760340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.760350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.766311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.766350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.292 [2024-12-16 21:28:18.766362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.766371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.766418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.766429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.292 [2024-12-16 21:28:18.766439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.766448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.766494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.766510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.292 [2024-12-16 21:28:18.766525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.766534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.766598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.766612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.292 [2024-12-16 21:28:18.766622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.766642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.766674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.766686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:29.292 [2024-12-16 21:28:18.766699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.766707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.766743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.766752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:22:29.292 [2024-12-16 21:28:18.766760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.766768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.766810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.292 [2024-12-16 21:28:18.766822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.292 [2024-12-16 21:28:18.766836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.292 [2024-12-16 21:28:18.766850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.292 [2024-12-16 21:28:18.766961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.056 ms, result 0 00:22:29.292 00:22:29.292 00:22:29.292 21:28:18 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:31.843 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:31.843 21:28:21 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:31.843 [2024-12-16 21:28:21.232844] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:31.843 [2024-12-16 21:28:21.232966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91619 ] 00:22:31.843 [2024-12-16 21:28:21.380107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.843 [2024-12-16 21:28:21.405098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.843 [2024-12-16 21:28:21.521844] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:31.843 [2024-12-16 21:28:21.521938] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:32.106 [2024-12-16 21:28:21.678308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.678356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:32.106 [2024-12-16 21:28:21.678374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:32.106 [2024-12-16 21:28:21.678382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.678426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.678436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:32.106 [2024-12-16 21:28:21.678445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:32.106 [2024-12-16 21:28:21.678463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.678489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:32.106 [2024-12-16 21:28:21.678811] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:32.106 [2024-12-16 21:28:21.678837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 
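The restore.sh step above first verifies the earlier read-back with md5sum -c, then uses spdk_dd to write the testfile back into the ftl0 bdev at an offset given by --seek. A rough sketch of that sequence; the paths and flags mirror the log, while the 4 KiB I/O unit behind the offset arithmetic is an assumption:

    # Sketch of the verify-then-restore step, assuming a 4 KiB I/O unit
    # for --seek. Paths are copied from the log lines above.
    import subprocess

    TESTFILE = "/home/vagrant/spdk_repo/spdk/test/ftl/testfile"
    FTL_JSON = "/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json"
    SEEK_UNITS = 131072  # I/O units to skip on the output bdev

    # At an assumed 4 KiB unit, --seek=131072 lands 512 MiB into ftl0.
    print(f"seek offset = {SEEK_UNITS * 4096 / 2**20:.0f} MiB")

    subprocess.run(["md5sum", "-c", TESTFILE + ".md5"], check=True)
    subprocess.run([
        "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd",
        f"--if={TESTFILE}", "--ob=ftl0",
        f"--json={FTL_JSON}", f"--seek={SEEK_UNITS}",
    ], check=True)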
[2024-12-16 21:28:21.678850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:32.106 [2024-12-16 21:28:21.678862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:22:32.106 [2024-12-16 21:28:21.678870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.680121] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:32.106 [2024-12-16 21:28:21.682997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.683041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:32.106 [2024-12-16 21:28:21.683052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:22:32.106 [2024-12-16 21:28:21.683067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.683123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.683133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:32.106 [2024-12-16 21:28:21.683144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:32.106 [2024-12-16 21:28:21.683151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.688949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.688981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:32.106 [2024-12-16 21:28:21.688996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.736 ms 00:22:32.106 [2024-12-16 21:28:21.689004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.689091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.689100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:32.106 [2024-12-16 21:28:21.689118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:32.106 [2024-12-16 21:28:21.689126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.689164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.689174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:32.106 [2024-12-16 21:28:21.689182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:32.106 [2024-12-16 21:28:21.689193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.689221] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:32.106 [2024-12-16 21:28:21.690792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.690824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:32.106 [2024-12-16 21:28:21.690834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:22:32.106 [2024-12-16 21:28:21.690841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.690873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.690882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:32.106 
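The layout dump just below reports "L2P entries: 20971520" with "L2P address size: 4", and separately an l2p region of 80.00 MiB; the region size is exactly entries times address size. A quick cross-check, with the 4 KiB logical block size behind the addressable-space figure as an assumption:

    # Cross-checking the L2P numbers in the layout dump that follows.
    # Entry count and address size come from the log; BLOCK_SIZE is assumed.
    L2P_ENTRIES = 20971520   # "L2P entries" in the layout dump
    ADDR_SIZE = 4            # "L2P address size" in bytes
    BLOCK_SIZE = 4096        # assumed logical block size

    table_mib = L2P_ENTRIES * ADDR_SIZE / 2**20
    print(f"L2P table: {table_mib:.2f} MiB")   # 80.00 MiB, matching the l2p region

    mapped_gib = L2P_ENTRIES * BLOCK_SIZE / 2**30
    print(f"addressable space: {mapped_gib:.0f} GiB")  # 80 GiB of user LBAs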
[2024-12-16 21:28:21.690890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:32.106 [2024-12-16 21:28:21.690899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.690918] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:32.106 [2024-12-16 21:28:21.690943] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:32.106 [2024-12-16 21:28:21.690981] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:32.106 [2024-12-16 21:28:21.690998] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:32.106 [2024-12-16 21:28:21.691099] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:32.106 [2024-12-16 21:28:21.691113] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:32.106 [2024-12-16 21:28:21.691127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:32.106 [2024-12-16 21:28:21.691137] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:32.106 [2024-12-16 21:28:21.691146] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:32.106 [2024-12-16 21:28:21.691158] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:32.106 [2024-12-16 21:28:21.691165] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:32.106 [2024-12-16 21:28:21.691172] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:32.106 [2024-12-16 21:28:21.691180] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:32.106 [2024-12-16 21:28:21.691192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.691199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:32.106 [2024-12-16 21:28:21.691211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:22:32.106 [2024-12-16 21:28:21.691218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.691309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.106 [2024-12-16 21:28:21.691317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:32.106 [2024-12-16 21:28:21.691324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:32.106 [2024-12-16 21:28:21.691332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.106 [2024-12-16 21:28:21.691427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:32.106 [2024-12-16 21:28:21.691438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:32.106 [2024-12-16 21:28:21.691447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.106 [2024-12-16 21:28:21.691455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.106 [2024-12-16 21:28:21.691464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:32.106 [2024-12-16 21:28:21.691471] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:32.106 [2024-12-16 21:28:21.691479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:32.106 [2024-12-16 21:28:21.691486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:32.106 [2024-12-16 21:28:21.691494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:32.106 [2024-12-16 21:28:21.691501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.106 [2024-12-16 21:28:21.691508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:32.106 [2024-12-16 21:28:21.691519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:32.106 [2024-12-16 21:28:21.691529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.106 [2024-12-16 21:28:21.691537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:32.106 [2024-12-16 21:28:21.691545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:32.106 [2024-12-16 21:28:21.691552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:32.107 [2024-12-16 21:28:21.691567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:32.107 [2024-12-16 21:28:21.691574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:32.107 [2024-12-16 21:28:21.691591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.107 [2024-12-16 21:28:21.691606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:32.107 [2024-12-16 21:28:21.691614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.107 [2024-12-16 21:28:21.691642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:32.107 [2024-12-16 21:28:21.691650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.107 [2024-12-16 21:28:21.691669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:32.107 [2024-12-16 21:28:21.691677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.107 [2024-12-16 21:28:21.691691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:32.107 [2024-12-16 21:28:21.691699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.107 [2024-12-16 21:28:21.691714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:32.107 [2024-12-16 21:28:21.691721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:32.107 [2024-12-16 21:28:21.691728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.107 [2024-12-16 
21:28:21.691736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:32.107 [2024-12-16 21:28:21.691743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:32.107 [2024-12-16 21:28:21.691750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:32.107 [2024-12-16 21:28:21.691765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:32.107 [2024-12-16 21:28:21.691773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691781] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:32.107 [2024-12-16 21:28:21.691794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:32.107 [2024-12-16 21:28:21.691802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.107 [2024-12-16 21:28:21.691810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.107 [2024-12-16 21:28:21.691819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:32.107 [2024-12-16 21:28:21.691827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:32.107 [2024-12-16 21:28:21.691835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:32.107 [2024-12-16 21:28:21.691842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:32.107 [2024-12-16 21:28:21.691849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:32.107 [2024-12-16 21:28:21.691857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:32.107 [2024-12-16 21:28:21.691866] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:32.107 [2024-12-16 21:28:21.691875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.107 [2024-12-16 21:28:21.691883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:32.107 [2024-12-16 21:28:21.691891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:32.107 [2024-12-16 21:28:21.691898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:32.107 [2024-12-16 21:28:21.691905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:32.107 [2024-12-16 21:28:21.691912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:32.107 [2024-12-16 21:28:21.691921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:32.107 [2024-12-16 21:28:21.691928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:32.107 [2024-12-16 21:28:21.691935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:32.107 [2024-12-16 21:28:21.691941] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:32.107 [2024-12-16 21:28:21.691954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:32.107 [2024-12-16 21:28:21.691961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:32.107 [2024-12-16 21:28:21.691968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:32.107 [2024-12-16 21:28:21.691975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:32.107 [2024-12-16 21:28:21.691982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:32.107 [2024-12-16 21:28:21.691989] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:32.107 [2024-12-16 21:28:21.691998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.107 [2024-12-16 21:28:21.692006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:32.107 [2024-12-16 21:28:21.692013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:32.107 [2024-12-16 21:28:21.692020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:32.107 [2024-12-16 21:28:21.692027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:32.107 [2024-12-16 21:28:21.692035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.692044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:32.107 [2024-12-16 21:28:21.692052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:22:32.107 [2024-12-16 21:28:21.692061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.702494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.702531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:32.107 [2024-12-16 21:28:21.702541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.383 ms 00:22:32.107 [2024-12-16 21:28:21.702554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.702652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.702661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:32.107 [2024-12-16 21:28:21.702670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:32.107 [2024-12-16 21:28:21.702677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.722549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.722605] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:32.107 [2024-12-16 21:28:21.722622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.822 ms 00:22:32.107 [2024-12-16 21:28:21.722651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.722707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.722722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:32.107 [2024-12-16 21:28:21.722735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:32.107 [2024-12-16 21:28:21.722746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.723176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.723216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:32.107 [2024-12-16 21:28:21.723232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:22:32.107 [2024-12-16 21:28:21.723245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.723437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.723453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:32.107 [2024-12-16 21:28:21.723467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:22:32.107 [2024-12-16 21:28:21.723480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.729713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.729753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:32.107 [2024-12-16 21:28:21.729766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.201 ms 00:22:32.107 [2024-12-16 21:28:21.729786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.732860] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:32.107 [2024-12-16 21:28:21.732914] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:32.107 [2024-12-16 21:28:21.732934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.732945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:32.107 [2024-12-16 21:28:21.732957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.032 ms 00:22:32.107 [2024-12-16 21:28:21.732967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.747746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.747797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:32.107 [2024-12-16 21:28:21.747808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.730 ms 00:22:32.107 [2024-12-16 21:28:21.747815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.107 [2024-12-16 21:28:21.749704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.107 [2024-12-16 21:28:21.749735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info 
metadata 00:22:32.107 [2024-12-16 21:28:21.749743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.851 ms 00:22:32.108 [2024-12-16 21:28:21.749750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.108 [2024-12-16 21:28:21.751272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.108 [2024-12-16 21:28:21.751304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:32.108 [2024-12-16 21:28:21.751313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:22:32.108 [2024-12-16 21:28:21.751320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.108 [2024-12-16 21:28:21.751624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.108 [2024-12-16 21:28:21.751647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:32.108 [2024-12-16 21:28:21.751656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:22:32.108 [2024-12-16 21:28:21.751663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.108 [2024-12-16 21:28:21.768726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.108 [2024-12-16 21:28:21.768768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:32.108 [2024-12-16 21:28:21.768779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.042 ms 00:22:32.108 [2024-12-16 21:28:21.768787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.108 [2024-12-16 21:28:21.776177] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:32.108 [2024-12-16 21:28:21.778488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.108 [2024-12-16 21:28:21.778527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:32.108 [2024-12-16 21:28:21.778538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.666 ms 00:22:32.108 [2024-12-16 21:28:21.778546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.108 [2024-12-16 21:28:21.778595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.108 [2024-12-16 21:28:21.778606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:32.108 [2024-12-16 21:28:21.778621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:32.108 [2024-12-16 21:28:21.778645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.108 [2024-12-16 21:28:21.778725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.108 [2024-12-16 21:28:21.778737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:32.108 [2024-12-16 21:28:21.778746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:32.108 [2024-12-16 21:28:21.778753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.108 [2024-12-16 21:28:21.778779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.108 [2024-12-16 21:28:21.778787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:32.108 [2024-12-16 21:28:21.778795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:32.108 [2024-12-16 21:28:21.778802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.108 [2024-12-16 
21:28:21.778829] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:22:32.108 [2024-12-16 21:28:21.778841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:32.108 [2024-12-16 21:28:21.778850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:22:32.108 [2024-12-16 21:28:21.778859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:22:32.108 [2024-12-16 21:28:21.778866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.108 [2024-12-16 21:28:21.782310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:32.108 [2024-12-16 21:28:21.782344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:32.108 [2024-12-16 21:28:21.782353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.421 ms
00:22:32.108 [2024-12-16 21:28:21.782361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.108 [2024-12-16 21:28:21.782422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:32.108 [2024-12-16 21:28:21.782431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:32.108 [2024-12-16 21:28:21.782439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:22:32.108 [2024-12-16 21:28:21.782451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.108 [2024-12-16 21:28:21.783356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.681 ms, result 0
00:22:33.491  [2024-12-16T21:28:24.132Z] Copying: 19/1024 [MB] (19 MBps)
[progress continues, 10-24 MBps per tick, through Copying: 1023/1024]
[2024-12-16T21:29:26.243Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-12-16 21:29:26.229229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.543 [2024-12-16 21:29:26.229820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:36.543 [2024-12-16 21:29:26.229846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:23:36.543 [2024-12-16 21:29:26.229855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:36.543 [2024-12-16 21:29:26.233087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:36.543 [2024-12-16 21:29:26.236939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.543 [2024-12-16 21:29:26.236982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:36.543 [2024-12-16 21:29:26.237000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.807 ms
00:23:36.543 [2024-12-16 21:29:26.237008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:36.804 [2024-12-16 21:29:26.248283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.804 [2024-12-16 21:29:26.248325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:36.804 [2024-12-16 21:29:26.248337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.248 ms
00:23:36.804 [2024-12-16 21:29:26.248345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:36.804 [2024-12-16 21:29:26.269918] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.805 [2024-12-16 21:29:26.269953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:36.805 [2024-12-16 21:29:26.269965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.557 ms 00:23:36.805 [2024-12-16 21:29:26.269974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.805 [2024-12-16 21:29:26.276041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.805 [2024-12-16 21:29:26.276070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:36.805 [2024-12-16 21:29:26.276081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.033 ms 00:23:36.805 [2024-12-16 21:29:26.276096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.805 [2024-12-16 21:29:26.278153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.805 [2024-12-16 21:29:26.278185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:36.805 [2024-12-16 21:29:26.278194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:23:36.805 [2024-12-16 21:29:26.278202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.805 [2024-12-16 21:29:26.282279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.805 [2024-12-16 21:29:26.282314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:36.805 [2024-12-16 21:29:26.282323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:23:36.805 [2024-12-16 21:29:26.282342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.087 [2024-12-16 21:29:26.525235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.087 [2024-12-16 21:29:26.525287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:37.087 [2024-12-16 21:29:26.525299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 242.857 ms 00:23:37.087 [2024-12-16 21:29:26.525308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.087 [2024-12-16 21:29:26.529113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.087 [2024-12-16 21:29:26.529191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:37.087 [2024-12-16 21:29:26.529203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.787 ms 00:23:37.087 [2024-12-16 21:29:26.529212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.087 [2024-12-16 21:29:26.532268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.087 [2024-12-16 21:29:26.532315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:37.087 [2024-12-16 21:29:26.532325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:23:37.087 [2024-12-16 21:29:26.532333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.087 [2024-12-16 21:29:26.534494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.087 [2024-12-16 21:29:26.534541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:37.087 [2024-12-16 21:29:26.534552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:23:37.087 [2024-12-16 21:29:26.534559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
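The copy loop above moved 1024 MB between roughly 21:28:22 and 21:29:26 wall clock, consistent with the logged "average 15 MBps", and each shutdown persist step logs its own duration. A small sketch that recomputes both figures from log text; the helper name and the inline sample are illustrative, not part of the test:

    # Recompute two figures from the log above: summed step durations
    # and the copy loop's average throughput.
    import re

    def total_step_ms(log_text: str) -> float:
        """Sum every 'duration: X ms' reported by trace_step lines."""
        return sum(float(ms) for ms in re.findall(r"duration: ([\d.]+) ms", log_text))

    sample = """
    duration: 21.557 ms
    duration: 6.033 ms
    duration: 242.857 ms
    """
    print(f"{total_step_ms(sample):.3f} ms across the sampled persist steps")

    # Copy-loop average: 1024 MB between 21:28:22 and 21:29:26 wall clock.
    elapsed_s = (21 * 3600 + 29 * 60 + 26) - (21 * 3600 + 28 * 60 + 22)
    print(f"{1024 / elapsed_s:.0f} MBps")  # = 16, in line with "average 15 MBps"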
00:23:37.087 [2024-12-16 21:29:26.536849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:37.087 [2024-12-16 21:29:26.536898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:37.087 [2024-12-16 21:29:26.536908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms
00:23:37.087 [2024-12-16 21:29:26.536916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:37.087 [2024-12-16 21:29:26.536954] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:37.087 [2024-12-16 21:29:26.536970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 87552 / 261120 wr_cnt: 1 state: open
00:23:37.087 [2024-12-16 21:29:26.536990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[bands 3 through 45 report identically: 0 / 261120 wr_cnt: 0 state: free]
00:23:37.088 [2024-12-16 21:29:26.537395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46:
0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537844] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:37.088 [2024-12-16 21:29:26.537896] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:37.088 [2024-12-16 21:29:26.537906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59037ec0-dba9-4cf7-9db3-267d4c6f8c8f 00:23:37.088 [2024-12-16 21:29:26.537914] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 87552 00:23:37.088 [2024-12-16 21:29:26.537929] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 88512 00:23:37.088 [2024-12-16 21:29:26.537945] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 87552 00:23:37.088 [2024-12-16 21:29:26.537954] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0110 00:23:37.088 [2024-12-16 21:29:26.537961] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:37.088 [2024-12-16 21:29:26.537970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:37.088 [2024-12-16 21:29:26.537978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:37.088 [2024-12-16 21:29:26.537986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:37.088 [2024-12-16 21:29:26.537992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:37.088 [2024-12-16 21:29:26.538002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.088 [2024-12-16 21:29:26.538011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:37.089 [2024-12-16 21:29:26.538021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:23:37.089 [2024-12-16 21:29:26.538029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.540256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.089 [2024-12-16 21:29:26.540294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:37.089 [2024-12-16 21:29:26.540305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:23:37.089 [2024-12-16 21:29:26.540314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.540425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.089 [2024-12-16 21:29:26.540438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:37.089 [2024-12-16 21:29:26.540447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:37.089 [2024-12-16 21:29:26.540458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.547708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.547752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:37.089 [2024-12-16 
21:29:26.547764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.547774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.547829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.547839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:37.089 [2024-12-16 21:29:26.547855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.547866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.547914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.547924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:37.089 [2024-12-16 21:29:26.547932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.547943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.547959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.547969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:37.089 [2024-12-16 21:29:26.547977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.547987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.561443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.561507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:37.089 [2024-12-16 21:29:26.561519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.561532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.572576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.572641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:37.089 [2024-12-16 21:29:26.572652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.572661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.572720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.572730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:37.089 [2024-12-16 21:29:26.572739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.572748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.572783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.572800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:37.089 [2024-12-16 21:29:26.572811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.572820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.572899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.572915] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:37.089 [2024-12-16 21:29:26.572927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.572939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.572968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.572978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:37.089 [2024-12-16 21:29:26.572986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.572995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.573038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.573053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:37.089 [2024-12-16 21:29:26.573062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.573070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.573118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.089 [2024-12-16 21:29:26.573142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:37.089 [2024-12-16 21:29:26.573151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.089 [2024-12-16 21:29:26.573201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.089 [2024-12-16 21:29:26.573346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.036 ms, result 0 00:23:37.659 00:23:37.659 00:23:37.659 21:29:27 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:37.659 [2024-12-16 21:29:27.290756] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
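A quick size check on the spdk_dd restore read above: --skip and --count appear to be in input-bdev blocks (dd semantics), and with ftl0's 4 KiB block size (an assumption here, though it is what the progress totals below imply) --count=262144 works out to exactly the 1048576 kB / 1024 MB the copy loop reports:

# Assumes a 4096-byte ftl0 block size; --count is from the command above.
block_size = 4096
count = 262144
assert count * block_size == 1024 * 1024 * 1024   # 1 GiB = 1048576 kB = 1024 MB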
00:23:37.659 [2024-12-16 21:29:27.290893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92299 ] 00:23:37.920 [2024-12-16 21:29:27.438786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.920 [2024-12-16 21:29:27.458492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.920 [2024-12-16 21:29:27.551673] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:37.920 [2024-12-16 21:29:27.551743] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.183 [2024-12-16 21:29:27.709711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.183 [2024-12-16 21:29:27.709771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:38.183 [2024-12-16 21:29:27.709790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:38.183 [2024-12-16 21:29:27.709798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.183 [2024-12-16 21:29:27.709857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.183 [2024-12-16 21:29:27.709868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.183 [2024-12-16 21:29:27.709878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:38.183 [2024-12-16 21:29:27.709892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.183 [2024-12-16 21:29:27.709920] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:38.183 [2024-12-16 21:29:27.710181] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:38.183 [2024-12-16 21:29:27.710212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.183 [2024-12-16 21:29:27.710223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.183 [2024-12-16 21:29:27.710235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:23:38.183 [2024-12-16 21:29:27.710244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.183 [2024-12-16 21:29:27.712507] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:38.183 [2024-12-16 21:29:27.716228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.183 [2024-12-16 21:29:27.716281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:38.184 [2024-12-16 21:29:27.716293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms 00:23:38.184 [2024-12-16 21:29:27.716309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.184 [2024-12-16 21:29:27.716380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.184 [2024-12-16 21:29:27.716390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:38.184 [2024-12-16 21:29:27.716401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:38.184 [2024-12-16 21:29:27.716409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.184 [2024-12-16 21:29:27.724214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:38.184 [2024-12-16 21:29:27.724266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.184 [2024-12-16 21:29:27.724288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.764 ms 00:23:38.184 [2024-12-16 21:29:27.724301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.184 [2024-12-16 21:29:27.724423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.184 [2024-12-16 21:29:27.724433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.184 [2024-12-16 21:29:27.724447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:38.184 [2024-12-16 21:29:27.724456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.184 [2024-12-16 21:29:27.724526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.184 [2024-12-16 21:29:27.724538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:38.184 [2024-12-16 21:29:27.724550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:38.184 [2024-12-16 21:29:27.724562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.184 [2024-12-16 21:29:27.724586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:38.184 [2024-12-16 21:29:27.726606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.184 [2024-12-16 21:29:27.726688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.184 [2024-12-16 21:29:27.726699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:23:38.184 [2024-12-16 21:29:27.726707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.184 [2024-12-16 21:29:27.726745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.184 [2024-12-16 21:29:27.726754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:38.184 [2024-12-16 21:29:27.726762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:38.184 [2024-12-16 21:29:27.726773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.184 [2024-12-16 21:29:27.726798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:38.184 [2024-12-16 21:29:27.726822] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:38.184 [2024-12-16 21:29:27.726864] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:38.184 [2024-12-16 21:29:27.726881] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:38.184 [2024-12-16 21:29:27.726989] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:38.184 [2024-12-16 21:29:27.727008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:38.184 [2024-12-16 21:29:27.727026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:38.184 [2024-12-16 21:29:27.727037] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727047] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727056] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:38.184 [2024-12-16 21:29:27.727065] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:38.184 [2024-12-16 21:29:27.727074] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:38.184 [2024-12-16 21:29:27.727081] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:38.184 [2024-12-16 21:29:27.727090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.184 [2024-12-16 21:29:27.727098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:38.184 [2024-12-16 21:29:27.727108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:23:38.184 [2024-12-16 21:29:27.727118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.184 [2024-12-16 21:29:27.727206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.184 [2024-12-16 21:29:27.727222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:38.184 [2024-12-16 21:29:27.727231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:38.184 [2024-12-16 21:29:27.727238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.184 [2024-12-16 21:29:27.727342] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:38.184 [2024-12-16 21:29:27.727356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:38.184 [2024-12-16 21:29:27.727365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:38.184 [2024-12-16 21:29:27.727394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:38.184 [2024-12-16 21:29:27.727420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.184 [2024-12-16 21:29:27.727437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:38.184 [2024-12-16 21:29:27.727448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:38.184 [2024-12-16 21:29:27.727456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.184 [2024-12-16 21:29:27.727465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:38.184 [2024-12-16 21:29:27.727474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:38.184 [2024-12-16 21:29:27.727482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:38.184 [2024-12-16 21:29:27.727500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727508] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:38.184 [2024-12-16 21:29:27.727524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:38.184 [2024-12-16 21:29:27.727547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:38.184 [2024-12-16 21:29:27.727570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:38.184 [2024-12-16 21:29:27.727601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:38.184 [2024-12-16 21:29:27.727639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.184 [2024-12-16 21:29:27.727656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:38.184 [2024-12-16 21:29:27.727664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:38.184 [2024-12-16 21:29:27.727672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.184 [2024-12-16 21:29:27.727679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:38.184 [2024-12-16 21:29:27.727686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:38.184 [2024-12-16 21:29:27.727692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:38.184 [2024-12-16 21:29:27.727705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:38.184 [2024-12-16 21:29:27.727712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727722] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:38.184 [2024-12-16 21:29:27.727732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:38.184 [2024-12-16 21:29:27.727739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.184 [2024-12-16 21:29:27.727762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:38.184 [2024-12-16 21:29:27.727773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:38.184 [2024-12-16 21:29:27.727781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:38.184 
[2024-12-16 21:29:27.727790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:38.184 [2024-12-16 21:29:27.727798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:38.184 [2024-12-16 21:29:27.727806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:38.184 [2024-12-16 21:29:27.727818] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:38.184 [2024-12-16 21:29:27.727829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.184 [2024-12-16 21:29:27.727838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:38.184 [2024-12-16 21:29:27.727847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:38.185 [2024-12-16 21:29:27.727857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:38.185 [2024-12-16 21:29:27.727867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:38.185 [2024-12-16 21:29:27.727878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:38.185 [2024-12-16 21:29:27.727886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:38.185 [2024-12-16 21:29:27.727895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:38.185 [2024-12-16 21:29:27.727903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:38.185 [2024-12-16 21:29:27.727910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:38.185 [2024-12-16 21:29:27.727923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:38.185 [2024-12-16 21:29:27.727931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:38.185 [2024-12-16 21:29:27.727939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:38.185 [2024-12-16 21:29:27.727946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:38.185 [2024-12-16 21:29:27.727954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:38.185 [2024-12-16 21:29:27.727961] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:38.185 [2024-12-16 21:29:27.727970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.185 [2024-12-16 21:29:27.727979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:38.185 [2024-12-16 21:29:27.727986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:38.185 [2024-12-16 21:29:27.727993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:38.185 [2024-12-16 21:29:27.728000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:38.185 [2024-12-16 21:29:27.728010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.728019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:38.185 [2024-12-16 21:29:27.728027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:23:38.185 [2024-12-16 21:29:27.728039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.741711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.741758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.185 [2024-12-16 21:29:27.741770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.629 ms 00:23:38.185 [2024-12-16 21:29:27.741778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.741865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.741874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:38.185 [2024-12-16 21:29:27.741882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:38.185 [2024-12-16 21:29:27.741896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.763977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.764053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.185 [2024-12-16 21:29:27.764074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.022 ms 00:23:38.185 [2024-12-16 21:29:27.764088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.764160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.764179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.185 [2024-12-16 21:29:27.764194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:38.185 [2024-12-16 21:29:27.764208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.764872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.764926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.185 [2024-12-16 21:29:27.764945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:23:38.185 [2024-12-16 21:29:27.764962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.765209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.765228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.185 [2024-12-16 21:29:27.765243] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:23:38.185 [2024-12-16 21:29:27.765256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.773656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.773696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.185 [2024-12-16 21:29:27.773714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.337 ms 00:23:38.185 [2024-12-16 21:29:27.773722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.777461] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:38.185 [2024-12-16 21:29:27.777516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:38.185 [2024-12-16 21:29:27.777532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.777541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:38.185 [2024-12-16 21:29:27.777551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.718 ms 00:23:38.185 [2024-12-16 21:29:27.777559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.793574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.793652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:38.185 [2024-12-16 21:29:27.793665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.963 ms 00:23:38.185 [2024-12-16 21:29:27.793674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.796661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.796703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:38.185 [2024-12-16 21:29:27.796713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms 00:23:38.185 [2024-12-16 21:29:27.796721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.799352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.799397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:38.185 [2024-12-16 21:29:27.799407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:23:38.185 [2024-12-16 21:29:27.799414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.799774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.799790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:38.185 [2024-12-16 21:29:27.799801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:23:38.185 [2024-12-16 21:29:27.799814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.825671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.825729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:38.185 [2024-12-16 21:29:27.825742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.833 ms 00:23:38.185 [2024-12-16 21:29:27.825750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.834020] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:38.185 [2024-12-16 21:29:27.837373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.837416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:38.185 [2024-12-16 21:29:27.837428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.570 ms 00:23:38.185 [2024-12-16 21:29:27.837441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.837521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.837533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:38.185 [2024-12-16 21:29:27.837543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:38.185 [2024-12-16 21:29:27.837559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.839174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.839223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:38.185 [2024-12-16 21:29:27.839234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:23:38.185 [2024-12-16 21:29:27.839241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.839269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.839278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:38.185 [2024-12-16 21:29:27.839287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:38.185 [2024-12-16 21:29:27.839294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.839335] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:38.185 [2024-12-16 21:29:27.839346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.839355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:38.185 [2024-12-16 21:29:27.839367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:38.185 [2024-12-16 21:29:27.839377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.185 [2024-12-16 21:29:27.844998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.185 [2024-12-16 21:29:27.845058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:38.185 [2024-12-16 21:29:27.845073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.603 ms 00:23:38.185 [2024-12-16 21:29:27.845081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.186 [2024-12-16 21:29:27.845173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.186 [2024-12-16 21:29:27.845184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:38.186 [2024-12-16 21:29:27.845193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:38.186 [2024-12-16 21:29:27.845204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.186 
[2024-12-16 21:29:27.846363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.190 ms, result 0 00:23:39.570  [2024-12-16T21:29:30.213Z] Copying: 6992/1048576 [kB] (6992 kBps) [2024-12-16T21:29:31.157Z] Copying: 17/1024 [MB] (11 MBps) [2024-12-16T21:29:32.100Z] Copying: 31/1024 [MB] (13 MBps) [2024-12-16T21:29:33.041Z] Copying: 42/1024 [MB] (10 MBps) [2024-12-16T21:29:34.427Z] Copying: 58/1024 [MB] (16 MBps) [2024-12-16T21:29:35.369Z] Copying: 75/1024 [MB] (16 MBps) [2024-12-16T21:29:36.309Z] Copying: 88/1024 [MB] (13 MBps) [2024-12-16T21:29:37.250Z] Copying: 104/1024 [MB] (15 MBps) [2024-12-16T21:29:38.219Z] Copying: 118/1024 [MB] (14 MBps) [2024-12-16T21:29:39.163Z] Copying: 131/1024 [MB] (12 MBps) [2024-12-16T21:29:40.105Z] Copying: 149/1024 [MB] (17 MBps) [2024-12-16T21:29:41.042Z] Copying: 159/1024 [MB] (10 MBps) [2024-12-16T21:29:42.426Z] Copying: 175/1024 [MB] (16 MBps) [2024-12-16T21:29:43.367Z] Copying: 186/1024 [MB] (10 MBps) [2024-12-16T21:29:44.309Z] Copying: 200/1024 [MB] (14 MBps) [2024-12-16T21:29:45.253Z] Copying: 217/1024 [MB] (16 MBps) [2024-12-16T21:29:46.191Z] Copying: 228/1024 [MB] (10 MBps) [2024-12-16T21:29:47.135Z] Copying: 244/1024 [MB] (16 MBps) [2024-12-16T21:29:48.078Z] Copying: 255/1024 [MB] (10 MBps) [2024-12-16T21:29:49.465Z] Copying: 275/1024 [MB] (20 MBps) [2024-12-16T21:29:50.036Z] Copying: 293/1024 [MB] (17 MBps) [2024-12-16T21:29:51.422Z] Copying: 304/1024 [MB] (10 MBps) [2024-12-16T21:29:52.366Z] Copying: 315/1024 [MB] (10 MBps) [2024-12-16T21:29:53.309Z] Copying: 325/1024 [MB] (10 MBps) [2024-12-16T21:29:54.249Z] Copying: 336/1024 [MB] (10 MBps) [2024-12-16T21:29:55.193Z] Copying: 347/1024 [MB] (11 MBps) [2024-12-16T21:29:56.132Z] Copying: 358/1024 [MB] (10 MBps) [2024-12-16T21:29:57.077Z] Copying: 371/1024 [MB] (13 MBps) [2024-12-16T21:29:58.452Z] Copying: 385/1024 [MB] (14 MBps) [2024-12-16T21:29:59.395Z] Copying: 400/1024 [MB] (14 MBps) [2024-12-16T21:30:00.336Z] Copying: 412/1024 [MB] (12 MBps) [2024-12-16T21:30:01.275Z] Copying: 424/1024 [MB] (11 MBps) [2024-12-16T21:30:02.211Z] Copying: 438/1024 [MB] (14 MBps) [2024-12-16T21:30:03.148Z] Copying: 450/1024 [MB] (11 MBps) [2024-12-16T21:30:04.082Z] Copying: 472/1024 [MB] (21 MBps) [2024-12-16T21:30:05.460Z] Copying: 482/1024 [MB] (10 MBps) [2024-12-16T21:30:06.396Z] Copying: 494/1024 [MB] (11 MBps) [2024-12-16T21:30:07.087Z] Copying: 506/1024 [MB] (12 MBps) [2024-12-16T21:30:08.028Z] Copying: 518/1024 [MB] (11 MBps) [2024-12-16T21:30:09.406Z] Copying: 533/1024 [MB] (15 MBps) [2024-12-16T21:30:10.346Z] Copying: 545/1024 [MB] (12 MBps) [2024-12-16T21:30:11.283Z] Copying: 559/1024 [MB] (13 MBps) [2024-12-16T21:30:12.226Z] Copying: 573/1024 [MB] (14 MBps) [2024-12-16T21:30:13.163Z] Copying: 594/1024 [MB] (20 MBps) [2024-12-16T21:30:14.104Z] Copying: 610/1024 [MB] (16 MBps) [2024-12-16T21:30:15.041Z] Copying: 622/1024 [MB] (12 MBps) [2024-12-16T21:30:16.417Z] Copying: 638/1024 [MB] (15 MBps) [2024-12-16T21:30:17.358Z] Copying: 653/1024 [MB] (14 MBps) [2024-12-16T21:30:18.296Z] Copying: 665/1024 [MB] (12 MBps) [2024-12-16T21:30:19.238Z] Copying: 681/1024 [MB] (15 MBps) [2024-12-16T21:30:20.174Z] Copying: 694/1024 [MB] (13 MBps) [2024-12-16T21:30:21.112Z] Copying: 706/1024 [MB] (11 MBps) [2024-12-16T21:30:22.052Z] Copying: 717/1024 [MB] (10 MBps) [2024-12-16T21:30:23.431Z] Copying: 727/1024 [MB] (10 MBps) [2024-12-16T21:30:24.366Z] Copying: 739/1024 [MB] (11 MBps) [2024-12-16T21:30:25.305Z] Copying: 751/1024 [MB] (11 MBps) 
[2024-12-16T21:30:26.241Z] Copying: 767/1024 [MB] (16 MBps) [2024-12-16T21:30:27.178Z] Copying: 778/1024 [MB] (11 MBps) [2024-12-16T21:30:28.117Z] Copying: 789/1024 [MB] (11 MBps) [2024-12-16T21:30:29.062Z] Copying: 801/1024 [MB] (11 MBps) [2024-12-16T21:30:30.440Z] Copying: 821/1024 [MB] (20 MBps) [2024-12-16T21:30:31.377Z] Copying: 834/1024 [MB] (12 MBps) [2024-12-16T21:30:32.315Z] Copying: 846/1024 [MB] (11 MBps) [2024-12-16T21:30:33.251Z] Copying: 861/1024 [MB] (15 MBps) [2024-12-16T21:30:34.185Z] Copying: 877/1024 [MB] (15 MBps) [2024-12-16T21:30:35.119Z] Copying: 893/1024 [MB] (15 MBps) [2024-12-16T21:30:36.127Z] Copying: 909/1024 [MB] (15 MBps) [2024-12-16T21:30:37.069Z] Copying: 922/1024 [MB] (13 MBps) [2024-12-16T21:30:38.450Z] Copying: 933/1024 [MB] (11 MBps) [2024-12-16T21:30:39.390Z] Copying: 946/1024 [MB] (12 MBps) [2024-12-16T21:30:40.332Z] Copying: 961/1024 [MB] (15 MBps) [2024-12-16T21:30:41.275Z] Copying: 974/1024 [MB] (12 MBps) [2024-12-16T21:30:42.216Z] Copying: 988/1024 [MB] (14 MBps) [2024-12-16T21:30:43.159Z] Copying: 1005/1024 [MB] (16 MBps) [2024-12-16T21:30:43.733Z] Copying: 1016/1024 [MB] (11 MBps) [2024-12-16T21:30:43.993Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-16 21:30:43.840982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.293 [2024-12-16 21:30:43.841086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:54.293 [2024-12-16 21:30:43.841115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:54.293 [2024-12-16 21:30:43.841135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.293 [2024-12-16 21:30:43.841219] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:54.293 [2024-12-16 21:30:43.843876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.293 [2024-12-16 21:30:43.844132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:54.293 [2024-12-16 21:30:43.844432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.622 ms 00:24:54.293 [2024-12-16 21:30:43.844469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.293 [2024-12-16 21:30:43.845005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.293 [2024-12-16 21:30:43.845043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:54.293 [2024-12-16 21:30:43.845065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:24:54.293 [2024-12-16 21:30:43.845084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.293 [2024-12-16 21:30:43.855263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.293 [2024-12-16 21:30:43.855307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:54.293 [2024-12-16 21:30:43.855319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.146 ms 00:24:54.293 [2024-12-16 21:30:43.855327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.293 [2024-12-16 21:30:43.862175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.293 [2024-12-16 21:30:43.862216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:54.293 [2024-12-16 21:30:43.862229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.795 ms 00:24:54.293 [2024-12-16 21:30:43.862237] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.293 [2024-12-16 21:30:43.865110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.293 [2024-12-16 21:30:43.865159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:54.294 [2024-12-16 21:30:43.865181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:24:54.294 [2024-12-16 21:30:43.865190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.294 [2024-12-16 21:30:43.871209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.294 [2024-12-16 21:30:43.871259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:54.294 [2024-12-16 21:30:43.871270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.975 ms 00:24:54.294 [2024-12-16 21:30:43.871286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.555 [2024-12-16 21:30:44.247574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.555 [2024-12-16 21:30:44.247617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:54.555 [2024-12-16 21:30:44.247649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 376.238 ms 00:24:54.555 [2024-12-16 21:30:44.247668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.555 [2024-12-16 21:30:44.251312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.555 [2024-12-16 21:30:44.251357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:54.555 [2024-12-16 21:30:44.251368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:24:54.555 [2024-12-16 21:30:44.251376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.555 [2024-12-16 21:30:44.254317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.555 [2024-12-16 21:30:44.254359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:54.555 [2024-12-16 21:30:44.254369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.902 ms 00:24:54.555 [2024-12-16 21:30:44.254378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.817 [2024-12-16 21:30:44.256823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.817 [2024-12-16 21:30:44.256870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:54.817 [2024-12-16 21:30:44.256880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:24:54.817 [2024-12-16 21:30:44.256888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.817 [2024-12-16 21:30:44.259492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.817 [2024-12-16 21:30:44.259540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:54.817 [2024-12-16 21:30:44.259550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:24:54.817 [2024-12-16 21:30:44.259558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.817 [2024-12-16 21:30:44.259750] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:54.818 [2024-12-16 21:30:44.259790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:54.818 [2024-12-16 21:30:44.259806] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.259995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260018] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 
[2024-12-16 21:30:44.260240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:24:54.818 [2024-12-16 21:30:44.260445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:54.818 [2024-12-16 21:30:44.260552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:54.819 [2024-12-16 21:30:44.260655] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:54.819 [2024-12-16 21:30:44.260663] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59037ec0-dba9-4cf7-9db3-267d4c6f8c8f 00:24:54.819 [2024-12-16 21:30:44.260673] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:54.819 [2024-12-16 21:30:44.260684] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 44480 00:24:54.819 [2024-12-16 21:30:44.260695] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 43520 00:24:54.819 [2024-12-16 21:30:44.260705] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0221 00:24:54.819 [2024-12-16 21:30:44.260713] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:54.819 [2024-12-16 21:30:44.260723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:54.819 [2024-12-16 21:30:44.260731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:54.819 [2024-12-16 21:30:44.260738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:54.819 [2024-12-16 21:30:44.260745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:54.819 [2024-12-16 21:30:44.260752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.819 [2024-12-16 21:30:44.260761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:54.819 [2024-12-16 21:30:44.260770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:24:54.819 [2024-12-16 21:30:44.260778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.263068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.819 [2024-12-16 21:30:44.263106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:54.819 [2024-12-16 21:30:44.263118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.273 ms 00:24:54.819 [2024-12-16 21:30:44.263129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.263251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.819 [2024-12-16 21:30:44.263261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:54.819 [2024-12-16 21:30:44.263271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:54.819 [2024-12-16 21:30:44.263286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.271095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.271147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.819 [2024-12-16 21:30:44.271158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.271166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.271228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.271239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.819 [2024-12-16 21:30:44.271247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.271256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.271304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:24:54.819 [2024-12-16 21:30:44.271316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.819 [2024-12-16 21:30:44.271325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.271333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.271349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.271358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.819 [2024-12-16 21:30:44.271366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.271374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.285758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.285810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.819 [2024-12-16 21:30:44.285822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.285830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.297147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.297213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.819 [2024-12-16 21:30:44.297225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.297234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.297300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.297311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.819 [2024-12-16 21:30:44.297320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.297329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.297367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.297377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.819 [2024-12-16 21:30:44.297385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.297393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.297466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.297481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.819 [2024-12-16 21:30:44.297490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.297498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.297533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.297544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:54.819 [2024-12-16 21:30:44.297552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.297561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 
21:30:44.297602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.297617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:54.819 [2024-12-16 21:30:44.297645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.297654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.297706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.819 [2024-12-16 21:30:44.297717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:54.819 [2024-12-16 21:30:44.297727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.819 [2024-12-16 21:30:44.297740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.819 [2024-12-16 21:30:44.297886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.888 ms, result 0 00:24:55.079 00:24:55.079 00:24:55.079 21:30:44 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:56.981 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 89949 00:24:56.981 21:30:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89949 ']' 00:24:56.981 21:30:46 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89949 00:24:56.981 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89949) - No such process 00:24:56.981 Process with pid 89949 is not found 00:24:56.981 21:30:46 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 89949 is not found' 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:56.981 Remove shared memory files 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:56.981 21:30:46 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:56.981 00:24:56.981 real 5m8.666s 00:24:56.981 user 4m57.206s 00:24:56.981 sys 0m11.522s 00:24:56.981 21:30:46 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.981 ************************************ 00:24:56.981 END TEST ftl_restore 00:24:56.981 ************************************ 00:24:56.981 21:30:46 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:56.981 21:30:46 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:56.981 21:30:46 ftl -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:56.981 21:30:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.981 21:30:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:56.981 ************************************ 00:24:56.981 START TEST ftl_dirty_shutdown 00:24:56.981 ************************************ 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:56.981 * Looking for test storage... 00:24:56.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:56.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.981 --rc genhtml_branch_coverage=1 00:24:56.981 --rc genhtml_function_coverage=1 00:24:56.981 --rc genhtml_legend=1 00:24:56.981 --rc geninfo_all_blocks=1 00:24:56.981 --rc geninfo_unexecuted_blocks=1 00:24:56.981 00:24:56.981 ' 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:56.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.981 --rc genhtml_branch_coverage=1 00:24:56.981 --rc genhtml_function_coverage=1 00:24:56.981 --rc genhtml_legend=1 00:24:56.981 --rc geninfo_all_blocks=1 00:24:56.981 --rc geninfo_unexecuted_blocks=1 00:24:56.981 00:24:56.981 ' 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:56.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.981 --rc genhtml_branch_coverage=1 00:24:56.981 --rc genhtml_function_coverage=1 00:24:56.981 --rc genhtml_legend=1 00:24:56.981 --rc geninfo_all_blocks=1 00:24:56.981 --rc geninfo_unexecuted_blocks=1 00:24:56.981 00:24:56.981 ' 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:56.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.981 --rc genhtml_branch_coverage=1 00:24:56.981 --rc genhtml_function_coverage=1 00:24:56.981 --rc genhtml_legend=1 00:24:56.981 --rc geninfo_all_blocks=1 00:24:56.981 --rc geninfo_unexecuted_blocks=1 00:24:56.981 00:24:56.981 ' 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:56.981 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:56.982 21:30:46 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=93174 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 93174 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93174 ']' 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.982 21:30:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:56.982 [2024-12-16 21:30:46.630276] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
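The spdk_tgt startup traced here begins the dirty_shutdown setup phase. Condensed into one place, the bdev stack it builds over the next few steps looks roughly like the sketch below; every binary, PCI address, and size is copied from the traced commands that follow, but this is an illustrative flat script, not the actual ftl/common.sh helpers (which also wait for the RPC socket via waitforlisten):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"
    "$SPDK/build/bin/spdk_tgt" -m 0x1 &                                   # target app pinned to core 0
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe: 1310720 blocks of 4096 B
    lvs=$("$RPC" bdev_lvol_create_lvstore nvme0n1 lvs)                    # prints the new lvstore UUID
    lvol=$("$RPC" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # 103424 MiB thin-provisioned lvol
    "$RPC" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV-cache NVMe
    "$RPC" bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB slice -> nvc0n1p0
    "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0  # FTL bdev, L2P DRAM capped at 10 MiB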
00:24:56.982 [2024-12-16 21:30:46.630384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93174 ] 00:24:57.244 [2024-12-16 21:30:46.773828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.244 [2024-12-16 21:30:46.802519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.815 21:30:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.815 21:30:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:57.815 21:30:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:57.815 21:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:57.815 21:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:57.815 21:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:57.815 21:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:57.815 21:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:58.387 21:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:58.387 21:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:58.387 21:30:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:58.387 21:30:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:58.387 21:30:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:58.387 21:30:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:58.387 21:30:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:58.387 21:30:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:58.387 { 00:24:58.387 "name": "nvme0n1", 00:24:58.387 "aliases": [ 00:24:58.387 "c83a542d-d29b-4c05-a9b1-ce6b938a99c3" 00:24:58.387 ], 00:24:58.387 "product_name": "NVMe disk", 00:24:58.387 "block_size": 4096, 00:24:58.387 "num_blocks": 1310720, 00:24:58.387 "uuid": "c83a542d-d29b-4c05-a9b1-ce6b938a99c3", 00:24:58.387 "numa_id": -1, 00:24:58.387 "assigned_rate_limits": { 00:24:58.387 "rw_ios_per_sec": 0, 00:24:58.387 "rw_mbytes_per_sec": 0, 00:24:58.387 "r_mbytes_per_sec": 0, 00:24:58.387 "w_mbytes_per_sec": 0 00:24:58.387 }, 00:24:58.387 "claimed": true, 00:24:58.387 "claim_type": "read_many_write_one", 00:24:58.387 "zoned": false, 00:24:58.387 "supported_io_types": { 00:24:58.387 "read": true, 00:24:58.387 "write": true, 00:24:58.387 "unmap": true, 00:24:58.387 "flush": true, 00:24:58.387 "reset": true, 00:24:58.387 "nvme_admin": true, 00:24:58.387 "nvme_io": true, 00:24:58.387 "nvme_io_md": false, 00:24:58.387 "write_zeroes": true, 00:24:58.387 "zcopy": false, 00:24:58.387 "get_zone_info": false, 00:24:58.387 "zone_management": false, 00:24:58.387 "zone_append": false, 00:24:58.387 "compare": true, 00:24:58.387 "compare_and_write": false, 00:24:58.387 "abort": true, 00:24:58.387 "seek_hole": false, 00:24:58.387 "seek_data": false, 00:24:58.387 
"copy": true, 00:24:58.387 "nvme_iov_md": false 00:24:58.387 }, 00:24:58.387 "driver_specific": { 00:24:58.387 "nvme": [ 00:24:58.387 { 00:24:58.387 "pci_address": "0000:00:11.0", 00:24:58.387 "trid": { 00:24:58.387 "trtype": "PCIe", 00:24:58.387 "traddr": "0000:00:11.0" 00:24:58.387 }, 00:24:58.387 "ctrlr_data": { 00:24:58.387 "cntlid": 0, 00:24:58.387 "vendor_id": "0x1b36", 00:24:58.387 "model_number": "QEMU NVMe Ctrl", 00:24:58.387 "serial_number": "12341", 00:24:58.387 "firmware_revision": "8.0.0", 00:24:58.387 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:58.387 "oacs": { 00:24:58.387 "security": 0, 00:24:58.387 "format": 1, 00:24:58.387 "firmware": 0, 00:24:58.387 "ns_manage": 1 00:24:58.387 }, 00:24:58.387 "multi_ctrlr": false, 00:24:58.387 "ana_reporting": false 00:24:58.387 }, 00:24:58.387 "vs": { 00:24:58.387 "nvme_version": "1.4" 00:24:58.387 }, 00:24:58.387 "ns_data": { 00:24:58.387 "id": 1, 00:24:58.387 "can_share": false 00:24:58.387 } 00:24:58.387 } 00:24:58.387 ], 00:24:58.387 "mp_policy": "active_passive" 00:24:58.387 } 00:24:58.387 } 00:24:58.387 ]' 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:58.387 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:58.649 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=dd28b129-35b7-42b6-9446-88f33ccfd6f3 00:24:58.649 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:58.649 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd28b129-35b7-42b6-9446-88f33ccfd6f3 00:24:58.910 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:59.171 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=81ed7b19-2d4d-4d7a-875e-b954b5314518 00:24:59.171 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 81ed7b19-2d4d-4d7a-875e-b954b5314518 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b70201ea-3620-4654-bf9a-a92b63c0e65b 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b70201ea-3620-4654-bf9a-a92b63c0e65b 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b70201ea-3620-4654-bf9a-a92b63c0e65b 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b70201ea-3620-4654-bf9a-a92b63c0e65b 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b70201ea-3620-4654-bf9a-a92b63c0e65b 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:59.431 21:30:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b70201ea-3620-4654-bf9a-a92b63c0e65b 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:59.691 { 00:24:59.691 "name": "b70201ea-3620-4654-bf9a-a92b63c0e65b", 00:24:59.691 "aliases": [ 00:24:59.691 "lvs/nvme0n1p0" 00:24:59.691 ], 00:24:59.691 "product_name": "Logical Volume", 00:24:59.691 "block_size": 4096, 00:24:59.691 "num_blocks": 26476544, 00:24:59.691 "uuid": "b70201ea-3620-4654-bf9a-a92b63c0e65b", 00:24:59.691 "assigned_rate_limits": { 00:24:59.691 "rw_ios_per_sec": 0, 00:24:59.691 "rw_mbytes_per_sec": 0, 00:24:59.691 "r_mbytes_per_sec": 0, 00:24:59.691 "w_mbytes_per_sec": 0 00:24:59.691 }, 00:24:59.691 "claimed": false, 00:24:59.691 "zoned": false, 00:24:59.691 "supported_io_types": { 00:24:59.691 "read": true, 00:24:59.691 "write": true, 00:24:59.691 "unmap": true, 00:24:59.691 "flush": false, 00:24:59.691 "reset": true, 00:24:59.691 "nvme_admin": false, 00:24:59.691 "nvme_io": false, 00:24:59.691 "nvme_io_md": false, 00:24:59.691 "write_zeroes": true, 00:24:59.691 "zcopy": false, 00:24:59.691 "get_zone_info": false, 00:24:59.691 "zone_management": false, 00:24:59.691 "zone_append": false, 00:24:59.691 "compare": false, 00:24:59.691 "compare_and_write": false, 00:24:59.691 "abort": false, 00:24:59.691 "seek_hole": true, 00:24:59.691 "seek_data": true, 00:24:59.691 "copy": false, 00:24:59.691 "nvme_iov_md": false 00:24:59.691 }, 00:24:59.691 "driver_specific": { 00:24:59.691 "lvol": { 00:24:59.691 "lvol_store_uuid": "81ed7b19-2d4d-4d7a-875e-b954b5314518", 00:24:59.691 "base_bdev": "nvme0n1", 00:24:59.691 "thin_provision": true, 00:24:59.691 "num_allocated_clusters": 0, 00:24:59.691 "snapshot": false, 00:24:59.691 "clone": false, 00:24:59.691 "esnap_clone": false 00:24:59.691 } 00:24:59.691 } 00:24:59.691 } 00:24:59.691 ]' 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:59.691 21:30:49 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:59.950 21:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:59.950 21:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:59.950 21:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b70201ea-3620-4654-bf9a-a92b63c0e65b 00:24:59.950 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b70201ea-3620-4654-bf9a-a92b63c0e65b 00:24:59.950 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:59.950 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:59.950 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:59.950 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b70201ea-3620-4654-bf9a-a92b63c0e65b 00:25:00.209 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:00.209 { 00:25:00.209 "name": "b70201ea-3620-4654-bf9a-a92b63c0e65b", 00:25:00.209 "aliases": [ 00:25:00.209 "lvs/nvme0n1p0" 00:25:00.209 ], 00:25:00.209 "product_name": "Logical Volume", 00:25:00.209 "block_size": 4096, 00:25:00.209 "num_blocks": 26476544, 00:25:00.209 "uuid": "b70201ea-3620-4654-bf9a-a92b63c0e65b", 00:25:00.209 "assigned_rate_limits": { 00:25:00.209 "rw_ios_per_sec": 0, 00:25:00.209 "rw_mbytes_per_sec": 0, 00:25:00.209 "r_mbytes_per_sec": 0, 00:25:00.209 "w_mbytes_per_sec": 0 00:25:00.209 }, 00:25:00.209 "claimed": false, 00:25:00.209 "zoned": false, 00:25:00.209 "supported_io_types": { 00:25:00.209 "read": true, 00:25:00.209 "write": true, 00:25:00.209 "unmap": true, 00:25:00.209 "flush": false, 00:25:00.209 "reset": true, 00:25:00.209 "nvme_admin": false, 00:25:00.209 "nvme_io": false, 00:25:00.209 "nvme_io_md": false, 00:25:00.209 "write_zeroes": true, 00:25:00.209 "zcopy": false, 00:25:00.209 "get_zone_info": false, 00:25:00.209 "zone_management": false, 00:25:00.209 "zone_append": false, 00:25:00.209 "compare": false, 00:25:00.209 "compare_and_write": false, 00:25:00.209 "abort": false, 00:25:00.209 "seek_hole": true, 00:25:00.209 "seek_data": true, 00:25:00.209 "copy": false, 00:25:00.209 "nvme_iov_md": false 00:25:00.209 }, 00:25:00.209 "driver_specific": { 00:25:00.209 "lvol": { 00:25:00.209 "lvol_store_uuid": "81ed7b19-2d4d-4d7a-875e-b954b5314518", 00:25:00.209 "base_bdev": "nvme0n1", 00:25:00.209 "thin_provision": true, 00:25:00.209 "num_allocated_clusters": 0, 00:25:00.209 "snapshot": false, 00:25:00.209 "clone": false, 00:25:00.209 "esnap_clone": false 00:25:00.209 } 00:25:00.209 } 00:25:00.209 } 00:25:00.209 ]' 00:25:00.209 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:00.209 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:00.209 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:00.209 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:00.209 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:00.209 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:00.209 21:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:00.209 21:30:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:00.468 21:30:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:00.468 21:30:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b70201ea-3620-4654-bf9a-a92b63c0e65b 00:25:00.468 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b70201ea-3620-4654-bf9a-a92b63c0e65b 00:25:00.468 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:00.468 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:00.468 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:00.468 21:30:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b70201ea-3620-4654-bf9a-a92b63c0e65b 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:00.727 { 00:25:00.727 "name": "b70201ea-3620-4654-bf9a-a92b63c0e65b", 00:25:00.727 "aliases": [ 00:25:00.727 "lvs/nvme0n1p0" 00:25:00.727 ], 00:25:00.727 "product_name": "Logical Volume", 00:25:00.727 "block_size": 4096, 00:25:00.727 "num_blocks": 26476544, 00:25:00.727 "uuid": "b70201ea-3620-4654-bf9a-a92b63c0e65b", 00:25:00.727 "assigned_rate_limits": { 00:25:00.727 "rw_ios_per_sec": 0, 00:25:00.727 "rw_mbytes_per_sec": 0, 00:25:00.727 "r_mbytes_per_sec": 0, 00:25:00.727 "w_mbytes_per_sec": 0 00:25:00.727 }, 00:25:00.727 "claimed": false, 00:25:00.727 "zoned": false, 00:25:00.727 "supported_io_types": { 00:25:00.727 "read": true, 00:25:00.727 "write": true, 00:25:00.727 "unmap": true, 00:25:00.727 "flush": false, 00:25:00.727 "reset": true, 00:25:00.727 "nvme_admin": false, 00:25:00.727 "nvme_io": false, 00:25:00.727 "nvme_io_md": false, 00:25:00.727 "write_zeroes": true, 00:25:00.727 "zcopy": false, 00:25:00.727 "get_zone_info": false, 00:25:00.727 "zone_management": false, 00:25:00.727 "zone_append": false, 00:25:00.727 "compare": false, 00:25:00.727 "compare_and_write": false, 00:25:00.727 "abort": false, 00:25:00.727 "seek_hole": true, 00:25:00.727 "seek_data": true, 00:25:00.727 "copy": false, 00:25:00.727 "nvme_iov_md": false 00:25:00.727 }, 00:25:00.727 "driver_specific": { 00:25:00.727 "lvol": { 00:25:00.727 "lvol_store_uuid": "81ed7b19-2d4d-4d7a-875e-b954b5314518", 00:25:00.727 "base_bdev": "nvme0n1", 00:25:00.727 "thin_provision": true, 00:25:00.727 "num_allocated_clusters": 0, 00:25:00.727 "snapshot": false, 00:25:00.727 "clone": false, 00:25:00.727 "esnap_clone": false 00:25:00.727 } 00:25:00.727 } 00:25:00.727 } 00:25:00.727 ]' 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b70201ea-3620-4654-bf9a-a92b63c0e65b 
--l2p_dram_limit 10' 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:00.727 21:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b70201ea-3620-4654-bf9a-a92b63c0e65b --l2p_dram_limit 10 -c nvc0n1p0 00:25:00.987 [2024-12-16 21:30:50.442692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.442725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:00.987 [2024-12-16 21:30:50.442735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:00.987 [2024-12-16 21:30:50.442743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.442787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.442797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:00.987 [2024-12-16 21:30:50.442804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:00.987 [2024-12-16 21:30:50.442813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.442830] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:00.987 [2024-12-16 21:30:50.443052] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:00.987 [2024-12-16 21:30:50.443064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.443072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:00.987 [2024-12-16 21:30:50.443079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:25:00.987 [2024-12-16 21:30:50.443086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.443111] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 04a3e9c6-0984-41f3-a8a4-585bb9630ddb 00:25:00.987 [2024-12-16 21:30:50.444034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.444051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:00.987 [2024-12-16 21:30:50.444062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:00.987 [2024-12-16 21:30:50.444068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.448690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.448716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.987 [2024-12-16 21:30:50.448725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.579 ms 00:25:00.987 [2024-12-16 21:30:50.448732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.448831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.448839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.987 [2024-12-16 21:30:50.448847] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:00.987 [2024-12-16 21:30:50.448853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.448901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.448913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:00.987 [2024-12-16 21:30:50.448920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:00.987 [2024-12-16 21:30:50.448925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.448944] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:00.987 [2024-12-16 21:30:50.450217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.450239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.987 [2024-12-16 21:30:50.450246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:25:00.987 [2024-12-16 21:30:50.450255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.450282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.450290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:00.987 [2024-12-16 21:30:50.450297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:00.987 [2024-12-16 21:30:50.450308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.450325] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:00.987 [2024-12-16 21:30:50.450439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:00.987 [2024-12-16 21:30:50.450452] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:00.987 [2024-12-16 21:30:50.450462] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:00.987 [2024-12-16 21:30:50.450470] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:00.987 [2024-12-16 21:30:50.450481] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:00.987 [2024-12-16 21:30:50.450487] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:00.987 [2024-12-16 21:30:50.450496] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:00.987 [2024-12-16 21:30:50.450501] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:00.987 [2024-12-16 21:30:50.450508] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:00.987 [2024-12-16 21:30:50.450676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.450686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:00.987 [2024-12-16 21:30:50.450692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:25:00.987 [2024-12-16 21:30:50.450699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.450764] mngt/ftl_mngt.c: 
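(Worked numbers from the layout report above: with 20971520 L2P entries at an address size of 4 bytes, the full mapping table is 80 MiB, which is exactly the l2p region size in the dump that follows; the --l2p_dram_limit 10 cap therefore forces partial residency, matching the "l2p maximum resident size is: 9 (of 10) MiB" notice later in this startup:)

    echo $(( 20971520 * 4 / 1048576 ))   # 80 (MiB) -- full L2P table size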
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.987 [2024-12-16 21:30:50.450775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:00.987 [2024-12-16 21:30:50.450781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:00.987 [2024-12-16 21:30:50.450790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.987 [2024-12-16 21:30:50.450863] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:00.987 [2024-12-16 21:30:50.450874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:00.987 [2024-12-16 21:30:50.450881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.987 [2024-12-16 21:30:50.450889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.987 [2024-12-16 21:30:50.450895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:00.987 [2024-12-16 21:30:50.450902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:00.987 [2024-12-16 21:30:50.450909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:00.987 [2024-12-16 21:30:50.450917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:00.987 [2024-12-16 21:30:50.450923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:00.987 [2024-12-16 21:30:50.450930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.987 [2024-12-16 21:30:50.450936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:00.987 [2024-12-16 21:30:50.450943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:00.987 [2024-12-16 21:30:50.450951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.987 [2024-12-16 21:30:50.450959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:00.987 [2024-12-16 21:30:50.450967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:00.987 [2024-12-16 21:30:50.450974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.987 [2024-12-16 21:30:50.450980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:00.987 [2024-12-16 21:30:50.450987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:00.987 [2024-12-16 21:30:50.450993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.987 [2024-12-16 21:30:50.451000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:00.987 [2024-12-16 21:30:50.451006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:00.987 [2024-12-16 21:30:50.451013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.987 [2024-12-16 21:30:50.451018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:00.987 [2024-12-16 21:30:50.451025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:00.987 [2024-12-16 21:30:50.451031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.987 [2024-12-16 21:30:50.451038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:00.987 [2024-12-16 21:30:50.451044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:00.987 [2024-12-16 21:30:50.451052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.987 [2024-12-16 21:30:50.451057] 
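(Side note on the p2l checkpoint regions in this dump: the "P2L checkpoint pages: 2048" value reported above matches each region's size at the 4 KiB block size:)

    echo $(( 2048 * 4096 / 1048576 ))   # 8 (MiB) per p2l region, matching the 8.00 MiB entries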
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:00.987 [2024-12-16 21:30:50.451066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:00.988 [2024-12-16 21:30:50.451072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:00.988 [2024-12-16 21:30:50.451079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:00.988 [2024-12-16 21:30:50.451085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:00.988 [2024-12-16 21:30:50.451092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.988 [2024-12-16 21:30:50.451098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:00.988 [2024-12-16 21:30:50.451105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:00.988 [2024-12-16 21:30:50.451110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.988 [2024-12-16 21:30:50.451117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:00.988 [2024-12-16 21:30:50.451122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:00.988 [2024-12-16 21:30:50.451129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.988 [2024-12-16 21:30:50.451135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:00.988 [2024-12-16 21:30:50.451141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:00.988 [2024-12-16 21:30:50.451147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.988 [2024-12-16 21:30:50.451154] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:00.988 [2024-12-16 21:30:50.451166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:00.988 [2024-12-16 21:30:50.451175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.988 [2024-12-16 21:30:50.451181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.988 [2024-12-16 21:30:50.451189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:00.988 [2024-12-16 21:30:50.451194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:00.988 [2024-12-16 21:30:50.451203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:00.988 [2024-12-16 21:30:50.451209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:00.988 [2024-12-16 21:30:50.451217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:00.988 [2024-12-16 21:30:50.451223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:00.988 [2024-12-16 21:30:50.451231] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:00.988 [2024-12-16 21:30:50.451240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.988 [2024-12-16 21:30:50.451249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:00.988 [2024-12-16 21:30:50.451255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:00.988 [2024-12-16 21:30:50.451261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:00.988 [2024-12-16 21:30:50.451267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:00.988 [2024-12-16 21:30:50.451273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:00.988 [2024-12-16 21:30:50.451279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:00.988 [2024-12-16 21:30:50.451287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:00.988 [2024-12-16 21:30:50.451292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:00.988 [2024-12-16 21:30:50.451298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:00.988 [2024-12-16 21:30:50.451304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:00.988 [2024-12-16 21:30:50.451311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:00.988 [2024-12-16 21:30:50.451316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:00.988 [2024-12-16 21:30:50.451322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:00.988 [2024-12-16 21:30:50.451328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:00.988 [2024-12-16 21:30:50.451334] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:00.988 [2024-12-16 21:30:50.451340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.988 [2024-12-16 21:30:50.451347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:00.988 [2024-12-16 21:30:50.451352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:00.988 [2024-12-16 21:30:50.451359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:00.988 [2024-12-16 21:30:50.451364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:00.988 [2024-12-16 21:30:50.451371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.988 [2024-12-16 21:30:50.451380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:00.988 [2024-12-16 21:30:50.451388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:25:00.988 [2024-12-16 21:30:50.451393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.988 [2024-12-16 21:30:50.451421] mngt/ftl_mngt_misc.c: 
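(The superblock layout entries above give region offsets and sizes in 4 KiB blocks, in hex. Two spot checks against the human-readable dump earlier, as a sketch:)

    echo $(( 0x5000 * 4096 / 1048576 ))     # 80     -> l2p region (type 0x2), 80.00 MiB above
    echo $(( 0x1900000 * 4096 / 1048576 ))  # 102400 -> base data region (type 0x9), data_btm above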
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:00.988 [2024-12-16 21:30:50.451429] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:04.284 [2024-12-16 21:30:53.650473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.650527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:04.284 [2024-12-16 21:30:53.650545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3199.033 ms 00:25:04.284 [2024-12-16 21:30:53.650554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.661272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.661311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:04.284 [2024-12-16 21:30:53.661326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.610 ms 00:25:04.284 [2024-12-16 21:30:53.661335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.661431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.661440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:04.284 [2024-12-16 21:30:53.661452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:04.284 [2024-12-16 21:30:53.661460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.671472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.671509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:04.284 [2024-12-16 21:30:53.671522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.960 ms 00:25:04.284 [2024-12-16 21:30:53.671537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.671569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.671577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:04.284 [2024-12-16 21:30:53.671588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:04.284 [2024-12-16 21:30:53.671595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.672015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.672036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:04.284 [2024-12-16 21:30:53.672047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:25:04.284 [2024-12-16 21:30:53.672056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.672181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.672193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:04.284 [2024-12-16 21:30:53.672204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:04.284 [2024-12-16 21:30:53.672213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.678784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.678814] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:04.284 [2024-12-16 21:30:53.678827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.542 ms 00:25:04.284 [2024-12-16 21:30:53.678835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.697118] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:04.284 [2024-12-16 21:30:53.700392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.700435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:04.284 [2024-12-16 21:30:53.700451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.496 ms 00:25:04.284 [2024-12-16 21:30:53.700462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.791417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.791480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:04.284 [2024-12-16 21:30:53.791499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.911 ms 00:25:04.284 [2024-12-16 21:30:53.791513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.791754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.791775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:04.284 [2024-12-16 21:30:53.791786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:25:04.284 [2024-12-16 21:30:53.791796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.798448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.798507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:04.284 [2024-12-16 21:30:53.798523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.612 ms 00:25:04.284 [2024-12-16 21:30:53.798535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.804399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.804454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:04.284 [2024-12-16 21:30:53.804465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.810 ms 00:25:04.284 [2024-12-16 21:30:53.804476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.804891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.804907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:04.284 [2024-12-16 21:30:53.804917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:25:04.284 [2024-12-16 21:30:53.804930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.856548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.856604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:04.284 [2024-12-16 21:30:53.856620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.594 ms 00:25:04.284 [2024-12-16 21:30:53.856649] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.864720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.864773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:04.284 [2024-12-16 21:30:53.864785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.981 ms 00:25:04.284 [2024-12-16 21:30:53.864796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.871766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.871819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:04.284 [2024-12-16 21:30:53.871830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.917 ms 00:25:04.284 [2024-12-16 21:30:53.871841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.878948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.879003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:04.284 [2024-12-16 21:30:53.879014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.054 ms 00:25:04.284 [2024-12-16 21:30:53.879029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.879087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.879106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:04.284 [2024-12-16 21:30:53.879116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:04.284 [2024-12-16 21:30:53.879127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.879201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.284 [2024-12-16 21:30:53.879216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:04.284 [2024-12-16 21:30:53.879226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:04.284 [2024-12-16 21:30:53.879240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.284 [2024-12-16 21:30:53.880445] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3437.241 ms, result 0 00:25:04.284 { 00:25:04.284 "name": "ftl0", 00:25:04.284 "uuid": "04a3e9c6-0984-41f3-a8a4-585bb9630ddb" 00:25:04.284 } 00:25:04.284 21:30:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:04.284 21:30:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:04.545 21:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:04.545 21:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:04.545 21:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:04.806 /dev/nbd0 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:04.806 1+0 records in 00:25:04.806 1+0 records out 00:25:04.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043007 s, 9.5 MB/s 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:04.806 21:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:04.806 [2024-12-16 21:30:54.459847] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:04.806 [2024-12-16 21:30:54.460005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93310 ] 00:25:05.066 [2024-12-16 21:30:54.606079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.066 [2024-12-16 21:30:54.633811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.010  [2024-12-16T21:30:57.097Z] Copying: 189/1024 [MB] (189 MBps) [2024-12-16T21:30:58.034Z] Copying: 379/1024 [MB] (190 MBps) [2024-12-16T21:30:58.970Z] Copying: 584/1024 [MB] (205 MBps) [2024-12-16T21:30:59.905Z] Copying: 795/1024 [MB] (210 MBps) [2024-12-16T21:30:59.905Z] Copying: 1009/1024 [MB] (214 MBps) [2024-12-16T21:31:00.164Z] Copying: 1024/1024 [MB] (average 202 MBps) 00:25:10.464 00:25:10.464 21:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:12.366 21:31:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:12.366 [2024-12-16 21:31:01.946798] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
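(The sequence traced above is the data-integrity setup: fill a 1 GiB file with random data, record its md5, then push it through the FTL bdev via /dev/nbd0. A plain-dd sketch of the same flow; the test itself uses spdk_dd, as shown:)

    dd if=/dev/urandom of=testfile bs=4096 count=262144   # 262144 * 4096 B = 1 GiB
    md5sum testfile                                       # reference checksum for later verification
    dd if=testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct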
00:25:12.366 [2024-12-16 21:31:01.946881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93399 ] 00:25:12.625 [2024-12-16 21:31:02.084710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.625 [2024-12-16 21:31:02.107423] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.560  [2024-12-16T21:31:04.209Z] Copying: 35/1024 [MB] (35 MBps) [2024-12-16T21:31:05.214Z] Copying: 65/1024 [MB] (30 MBps) [2024-12-16T21:31:06.597Z] Copying: 85/1024 [MB] (19 MBps) [2024-12-16T21:31:07.169Z] Copying: 107/1024 [MB] (22 MBps) [2024-12-16T21:31:08.550Z] Copying: 127/1024 [MB] (19 MBps) [2024-12-16T21:31:09.494Z] Copying: 150/1024 [MB] (23 MBps) [2024-12-16T21:31:10.435Z] Copying: 178/1024 [MB] (28 MBps) [2024-12-16T21:31:11.375Z] Copying: 202/1024 [MB] (23 MBps) [2024-12-16T21:31:12.314Z] Copying: 225/1024 [MB] (22 MBps) [2024-12-16T21:31:13.256Z] Copying: 250/1024 [MB] (25 MBps) [2024-12-16T21:31:14.196Z] Copying: 276/1024 [MB] (25 MBps) [2024-12-16T21:31:15.579Z] Copying: 303/1024 [MB] (27 MBps) [2024-12-16T21:31:16.514Z] Copying: 328/1024 [MB] (24 MBps) [2024-12-16T21:31:17.456Z] Copying: 356/1024 [MB] (27 MBps) [2024-12-16T21:31:18.391Z] Copying: 378/1024 [MB] (21 MBps) [2024-12-16T21:31:19.333Z] Copying: 408/1024 [MB] (30 MBps) [2024-12-16T21:31:20.274Z] Copying: 435/1024 [MB] (26 MBps) [2024-12-16T21:31:21.216Z] Copying: 457/1024 [MB] (21 MBps) [2024-12-16T21:31:22.594Z] Copying: 481/1024 [MB] (24 MBps) [2024-12-16T21:31:23.174Z] Copying: 505/1024 [MB] (23 MBps) [2024-12-16T21:31:24.557Z] Copying: 533/1024 [MB] (27 MBps) [2024-12-16T21:31:25.499Z] Copying: 560/1024 [MB] (27 MBps) [2024-12-16T21:31:26.438Z] Copying: 584/1024 [MB] (24 MBps) [2024-12-16T21:31:27.375Z] Copying: 616/1024 [MB] (31 MBps) [2024-12-16T21:31:28.318Z] Copying: 648/1024 [MB] (32 MBps) [2024-12-16T21:31:29.254Z] Copying: 675/1024 [MB] (26 MBps) [2024-12-16T21:31:30.197Z] Copying: 705/1024 [MB] (30 MBps) [2024-12-16T21:31:31.581Z] Copying: 727/1024 [MB] (21 MBps) [2024-12-16T21:31:32.521Z] Copying: 751/1024 [MB] (24 MBps) [2024-12-16T21:31:33.509Z] Copying: 775/1024 [MB] (23 MBps) [2024-12-16T21:31:34.468Z] Copying: 805/1024 [MB] (29 MBps) [2024-12-16T21:31:35.408Z] Copying: 834/1024 [MB] (29 MBps) [2024-12-16T21:31:36.346Z] Copying: 864/1024 [MB] (29 MBps) [2024-12-16T21:31:37.284Z] Copying: 892/1024 [MB] (27 MBps) [2024-12-16T21:31:38.221Z] Copying: 922/1024 [MB] (29 MBps) [2024-12-16T21:31:39.607Z] Copying: 954/1024 [MB] (32 MBps) [2024-12-16T21:31:40.175Z] Copying: 982/1024 [MB] (28 MBps) [2024-12-16T21:31:40.744Z] Copying: 1010/1024 [MB] (28 MBps) [2024-12-16T21:31:41.002Z] Copying: 1024/1024 [MB] (average 26 MBps) 00:25:51.302 00:25:51.302 21:31:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:51.302 21:31:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:51.562 21:31:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:51.562 [2024-12-16 21:31:41.213346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.213380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:51.562 [2024-12-16 21:31:41.213392] 
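(A quick sanity check on the reported rate: the copy through /dev/nbd0 above moved 1024 MiB between roughly 21:31:02 and 21:31:41, about 39 s:)

    echo $(( 1024 / 39 ))   # ~26 MBps, matching the "average 26 MBps" summary line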
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:51.562 [2024-12-16 21:31:41.213399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.213419] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:51.562 [2024-12-16 21:31:41.213804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.213825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:51.562 [2024-12-16 21:31:41.213833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:25:51.562 [2024-12-16 21:31:41.213840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.215826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.215853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:51.562 [2024-12-16 21:31:41.215861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:25:51.562 [2024-12-16 21:31:41.215871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.232175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.232201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:51.562 [2024-12-16 21:31:41.232210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.291 ms 00:25:51.562 [2024-12-16 21:31:41.232218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.236834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.236854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:51.562 [2024-12-16 21:31:41.236861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:25:51.562 [2024-12-16 21:31:41.236868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.238272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.238300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:51.562 [2024-12-16 21:31:41.238307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:25:51.562 [2024-12-16 21:31:41.238314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.242816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.242845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:51.562 [2024-12-16 21:31:41.242852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.478 ms 00:25:51.562 [2024-12-16 21:31:41.242859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.242953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.242962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:51.562 [2024-12-16 21:31:41.242969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:51.562 [2024-12-16 21:31:41.242980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.244737] mngt/ftl_mngt.c: 
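(This unload trace is the clean-shutdown path: the L2P and metadata are persisted, ending with the "Set FTL clean state" step just below, the counterpart of "Set FTL dirty state" seen at startup. The teardown commands as issued earlier in this run, consolidated as a sketch:)

    sync /dev/nbd0
    scripts/rpc.py nbd_stop_disk /dev/nbd0
    scripts/rpc.py bdev_ftl_unload -b ftl0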
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.244761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:51.562 [2024-12-16 21:31:41.244767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:25:51.562 [2024-12-16 21:31:41.244774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.246201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.246227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:51.562 [2024-12-16 21:31:41.246234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:25:51.562 [2024-12-16 21:31:41.246241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.247360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.247384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:51.562 [2024-12-16 21:31:41.247391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:25:51.562 [2024-12-16 21:31:41.247398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.248471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.562 [2024-12-16 21:31:41.248497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:51.562 [2024-12-16 21:31:41.248503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:25:51.562 [2024-12-16 21:31:41.248511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.562 [2024-12-16 21:31:41.248535] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:51.562 [2024-12-16 21:31:41.248547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:51.562 [2024-12-16 21:31:41.248557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:51.562 [2024-12-16 21:31:41.248564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:51.562 [2024-12-16 21:31:41.248570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:51.562 [2024-12-16 21:31:41.248579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:51.562 [2024-12-16 21:31:41.248584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:51.562 [2024-12-16 21:31:41.248592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:51.562 [2024-12-16 21:31:41.248597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 
wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248982] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.248995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249143] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:51.563 [2024-12-16 21:31:41.249231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:51.564 [2024-12-16 21:31:41.249237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:51.564 [2024-12-16 21:31:41.249251] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:51.564 [2024-12-16 21:31:41.249256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04a3e9c6-0984-41f3-a8a4-585bb9630ddb 00:25:51.564 [2024-12-16 21:31:41.249265] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:51.564 [2024-12-16 21:31:41.249271] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:51.564 [2024-12-16 21:31:41.249278] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:51.564 [2024-12-16 21:31:41.249283] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:51.564 [2024-12-16 21:31:41.249290] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:51.564 [2024-12-16 21:31:41.249296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:51.564 [2024-12-16 21:31:41.249307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:51.564 [2024-12-16 21:31:41.249311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:51.564 [2024-12-16 21:31:41.249320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:51.564 [2024-12-16 21:31:41.249325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.564 [2024-12-16 21:31:41.249332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:51.564 [2024-12-16 21:31:41.249339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 
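(On the bands dump above: each band advertises 261120 user blocks, so at the 4 KiB block size one band holds roughly 1 GiB; all 100 bands report "state: free" with wr_cnt 0 at this point:)

    echo $(( 261120 * 4096 / 1048576 ))   # 1020 MiB of addressable space per band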
00:25:51.564 [2024-12-16 21:31:41.249348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.564 [2024-12-16 21:31:41.250577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.564 [2024-12-16 21:31:41.250596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:51.564 [2024-12-16 21:31:41.250603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:25:51.564 [2024-12-16 21:31:41.250610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.564 [2024-12-16 21:31:41.250695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.564 [2024-12-16 21:31:41.250707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:51.564 [2024-12-16 21:31:41.250714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:51.564 [2024-12-16 21:31:41.250721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.564 [2024-12-16 21:31:41.255077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.564 [2024-12-16 21:31:41.255106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:51.564 [2024-12-16 21:31:41.255114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.564 [2024-12-16 21:31:41.255121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.564 [2024-12-16 21:31:41.255163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.564 [2024-12-16 21:31:41.255173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:51.564 [2024-12-16 21:31:41.255179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.564 [2024-12-16 21:31:41.255188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.564 [2024-12-16 21:31:41.255239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.564 [2024-12-16 21:31:41.255250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:51.564 [2024-12-16 21:31:41.255256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.564 [2024-12-16 21:31:41.255263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.564 [2024-12-16 21:31:41.255276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.564 [2024-12-16 21:31:41.255283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:51.564 [2024-12-16 21:31:41.255289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.564 [2024-12-16 21:31:41.255299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.823 [2024-12-16 21:31:41.263175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.823 [2024-12-16 21:31:41.263206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:51.823 [2024-12-16 21:31:41.263213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.823 [2024-12-16 21:31:41.263221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.823 [2024-12-16 21:31:41.269658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.823 [2024-12-16 21:31:41.269688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:51.823 [2024-12-16 21:31:41.269698] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.823 [2024-12-16 21:31:41.269705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.823 [2024-12-16 21:31:41.269772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.823 [2024-12-16 21:31:41.269784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:51.823 [2024-12-16 21:31:41.269790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.823 [2024-12-16 21:31:41.269797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.823 [2024-12-16 21:31:41.269836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.823 [2024-12-16 21:31:41.269847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:51.823 [2024-12-16 21:31:41.269853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.823 [2024-12-16 21:31:41.269860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.823 [2024-12-16 21:31:41.269910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.823 [2024-12-16 21:31:41.269925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:51.823 [2024-12-16 21:31:41.269931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.823 [2024-12-16 21:31:41.269938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.823 [2024-12-16 21:31:41.269960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.823 [2024-12-16 21:31:41.269968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:51.823 [2024-12-16 21:31:41.269974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.823 [2024-12-16 21:31:41.269983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.823 [2024-12-16 21:31:41.270013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.823 [2024-12-16 21:31:41.270023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:51.823 [2024-12-16 21:31:41.270029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.823 [2024-12-16 21:31:41.270037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.823 [2024-12-16 21:31:41.270070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.823 [2024-12-16 21:31:41.270080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:51.823 [2024-12-16 21:31:41.270086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.823 [2024-12-16 21:31:41.270092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.823 [2024-12-16 21:31:41.270193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.824 ms, result 0 00:25:51.823 true 00:25:51.823 21:31:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 93174 00:25:51.823 21:31:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid93174 00:25:51.823 21:31:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:51.823 [2024-12-16 
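(This is the dirty-shutdown step itself: the target process is killed with SIGKILL, so the clean unload path never runs and the superblock is left in the dirty state set at startup. The follow-up spdk_dd then writes a second gigabyte at --seek=262144, i.e. immediately after the first, and the blobstore-recovery notices below show the recovery kicking in on the next open. Offset check:)

    echo $(( 262144 * 4096 ))   # 1073741824-byte seek offset = 1 GiB, right past the first write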
21:31:41.363382] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:51.823 [2024-12-16 21:31:41.363501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93809 ] 00:25:51.823 [2024-12-16 21:31:41.506451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.081 [2024-12-16 21:31:41.525351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.017  [2024-12-16T21:31:43.652Z] Copying: 261/1024 [MB] (261 MBps) [2024-12-16T21:31:44.588Z] Copying: 522/1024 [MB] (260 MBps) [2024-12-16T21:31:45.962Z] Copying: 780/1024 [MB] (258 MBps) [2024-12-16T21:31:45.962Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:25:56.262 00:25:56.262 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 93174 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:56.262 21:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:56.262 [2024-12-16 21:31:45.730880] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:56.262 [2024-12-16 21:31:45.730998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93856 ] 00:25:56.262 [2024-12-16 21:31:45.874774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.262 [2024-12-16 21:31:45.891485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.521 [2024-12-16 21:31:45.973481] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.521 [2024-12-16 21:31:45.973535] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.521 [2024-12-16 21:31:46.035778] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:56.521 [2024-12-16 21:31:46.036384] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:56.521 [2024-12-16 21:31:46.036953] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:57.091 [2024-12-16 21:31:46.484187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.484220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:57.092 [2024-12-16 21:31:46.484229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:57.092 [2024-12-16 21:31:46.484240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.484277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.484285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.092 [2024-12-16 21:31:46.484291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:57.092 [2024-12-16 21:31:46.484297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.484311] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as 
write buffer cache 00:25:57.092 [2024-12-16 21:31:46.484495] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:57.092 [2024-12-16 21:31:46.484507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.484512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.092 [2024-12-16 21:31:46.484518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:25:57.092 [2024-12-16 21:31:46.484524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.485426] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:57.092 [2024-12-16 21:31:46.487864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.487891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:57.092 [2024-12-16 21:31:46.487898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:25:57.092 [2024-12-16 21:31:46.487904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.487946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.487953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:57.092 [2024-12-16 21:31:46.487960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:57.092 [2024-12-16 21:31:46.487966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.492182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.492204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.092 [2024-12-16 21:31:46.492213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.183 ms 00:25:57.092 [2024-12-16 21:31:46.492219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.492281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.492289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.092 [2024-12-16 21:31:46.492295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:57.092 [2024-12-16 21:31:46.492303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.492337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.492344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:57.092 [2024-12-16 21:31:46.492351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.092 [2024-12-16 21:31:46.492356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.492373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:57.092 [2024-12-16 21:31:46.493519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.493539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.092 [2024-12-16 21:31:46.493546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:25:57.092 [2024-12-16 21:31:46.493554] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.493577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.493583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:57.092 [2024-12-16 21:31:46.493589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:57.092 [2024-12-16 21:31:46.493594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.493615] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:57.092 [2024-12-16 21:31:46.493644] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:57.092 [2024-12-16 21:31:46.493677] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:57.092 [2024-12-16 21:31:46.493691] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:57.092 [2024-12-16 21:31:46.493769] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:57.092 [2024-12-16 21:31:46.493780] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:57.092 [2024-12-16 21:31:46.493789] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:57.092 [2024-12-16 21:31:46.493796] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:57.092 [2024-12-16 21:31:46.493806] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:57.092 [2024-12-16 21:31:46.493812] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:57.092 [2024-12-16 21:31:46.493817] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:57.092 [2024-12-16 21:31:46.493823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:57.092 [2024-12-16 21:31:46.493830] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:57.092 [2024-12-16 21:31:46.493836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.493842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:57.092 [2024-12-16 21:31:46.493847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:25:57.092 [2024-12-16 21:31:46.493856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.493918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-12-16 21:31:46.493925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:57.092 [2024-12-16 21:31:46.493930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:57.092 [2024-12-16 21:31:46.493940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-12-16 21:31:46.494012] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:57.092 [2024-12-16 21:31:46.494021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:57.092 [2024-12-16 21:31:46.494028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.092 
[2024-12-16 21:31:46.494033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:57.092 [2024-12-16 21:31:46.494044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:57.092 [2024-12-16 21:31:46.494056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:57.092 [2024-12-16 21:31:46.494062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.092 [2024-12-16 21:31:46.494072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:57.092 [2024-12-16 21:31:46.494077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:57.092 [2024-12-16 21:31:46.494081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.092 [2024-12-16 21:31:46.494086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:57.092 [2024-12-16 21:31:46.494091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:57.092 [2024-12-16 21:31:46.494100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:57.092 [2024-12-16 21:31:46.494110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:57.092 [2024-12-16 21:31:46.494115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:57.092 [2024-12-16 21:31:46.494127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.092 [2024-12-16 21:31:46.494138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:57.092 [2024-12-16 21:31:46.494142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.092 [2024-12-16 21:31:46.494152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:57.092 [2024-12-16 21:31:46.494156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.092 [2024-12-16 21:31:46.494167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:57.092 [2024-12-16 21:31:46.494172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.092 [2024-12-16 21:31:46.494186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:57.092 [2024-12-16 21:31:46.494192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.092 [2024-12-16 21:31:46.494203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:25:57.092 [2024-12-16 21:31:46.494208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:57.092 [2024-12-16 21:31:46.494214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.092 [2024-12-16 21:31:46.494219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:57.092 [2024-12-16 21:31:46.494225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:57.092 [2024-12-16 21:31:46.494231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:57.092 [2024-12-16 21:31:46.494242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:57.092 [2024-12-16 21:31:46.494248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.092 [2024-12-16 21:31:46.494260] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:57.093 [2024-12-16 21:31:46.494267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:57.093 [2024-12-16 21:31:46.494274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.093 [2024-12-16 21:31:46.494280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.093 [2024-12-16 21:31:46.494288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:57.093 [2024-12-16 21:31:46.494294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:57.093 [2024-12-16 21:31:46.494299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:57.093 [2024-12-16 21:31:46.494305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:57.093 [2024-12-16 21:31:46.494310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:57.093 [2024-12-16 21:31:46.494316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:57.093 [2024-12-16 21:31:46.494323] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:57.093 [2024-12-16 21:31:46.494331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.093 [2024-12-16 21:31:46.494338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:57.093 [2024-12-16 21:31:46.494344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:57.093 [2024-12-16 21:31:46.494350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:57.093 [2024-12-16 21:31:46.494357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:57.093 [2024-12-16 21:31:46.494363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:57.093 [2024-12-16 21:31:46.494373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:57.093 [2024-12-16 21:31:46.494379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:57.093 [2024-12-16 21:31:46.494384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:57.093 [2024-12-16 21:31:46.494392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:57.093 [2024-12-16 21:31:46.494398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:57.093 [2024-12-16 21:31:46.494404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:57.093 [2024-12-16 21:31:46.494411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:57.093 [2024-12-16 21:31:46.494417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:57.093 [2024-12-16 21:31:46.494424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:57.093 [2024-12-16 21:31:46.494430] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:57.093 [2024-12-16 21:31:46.494438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.093 [2024-12-16 21:31:46.494447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:57.093 [2024-12-16 21:31:46.494454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:57.093 [2024-12-16 21:31:46.494460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:57.093 [2024-12-16 21:31:46.494466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:57.093 [2024-12-16 21:31:46.494473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.494479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:57.093 [2024-12-16 21:31:46.494486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:25:57.093 [2024-12-16 21:31:46.494493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.502127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.502154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.093 [2024-12-16 21:31:46.502162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.603 ms 00:25:57.093 [2024-12-16 21:31:46.502168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.502229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.502240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:57.093 [2024-12-16 21:31:46.502246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 
00:25:57.093 [2024-12-16 21:31:46.502252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.520106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.520142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.093 [2024-12-16 21:31:46.520153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.829 ms 00:25:57.093 [2024-12-16 21:31:46.520167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.520212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.520222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.093 [2024-12-16 21:31:46.520230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:57.093 [2024-12-16 21:31:46.520238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.520567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.520594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.093 [2024-12-16 21:31:46.520604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:25:57.093 [2024-12-16 21:31:46.520615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.520746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.520758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.093 [2024-12-16 21:31:46.520767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:57.093 [2024-12-16 21:31:46.520775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.526068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.526105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.093 [2024-12-16 21:31:46.526117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.269 ms 00:25:57.093 [2024-12-16 21:31:46.526130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.528886] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:57.093 [2024-12-16 21:31:46.528923] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:57.093 [2024-12-16 21:31:46.528936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.528946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:57.093 [2024-12-16 21:31:46.528956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:25:57.093 [2024-12-16 21:31:46.528964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.541449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.541476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:57.093 [2024-12-16 21:31:46.541484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.444 ms 00:25:57.093 [2024-12-16 21:31:46.541490] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.543206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.543230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:57.093 [2024-12-16 21:31:46.543237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:25:57.093 [2024-12-16 21:31:46.543242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.544879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.544903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:57.093 [2024-12-16 21:31:46.544910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:25:57.093 [2024-12-16 21:31:46.544915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.545154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.545164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:57.093 [2024-12-16 21:31:46.545171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:25:57.093 [2024-12-16 21:31:46.545187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.560296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.560327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:57.093 [2024-12-16 21:31:46.560336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.097 ms 00:25:57.093 [2024-12-16 21:31:46.560342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.566037] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:57.093 [2024-12-16 21:31:46.568011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.568034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:57.093 [2024-12-16 21:31:46.568049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.641 ms 00:25:57.093 [2024-12-16 21:31:46.568055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.568091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.568098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:57.093 [2024-12-16 21:31:46.568109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:57.093 [2024-12-16 21:31:46.568115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.568168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.568176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:57.093 [2024-12-16 21:31:46.568182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:57.093 [2024-12-16 21:31:46.568188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-12-16 21:31:46.568201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-12-16 21:31:46.568208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:25:57.094 [2024-12-16 21:31:46.568214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:57.094 [2024-12-16 21:31:46.568222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-12-16 21:31:46.568245] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:57.094 [2024-12-16 21:31:46.568253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-12-16 21:31:46.568259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:57.094 [2024-12-16 21:31:46.568268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:57.094 [2024-12-16 21:31:46.568274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-12-16 21:31:46.571610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-12-16 21:31:46.571644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.094 [2024-12-16 21:31:46.571651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:25:57.094 [2024-12-16 21:31:46.571657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-12-16 21:31:46.571710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-12-16 21:31:46.571718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:57.094 [2024-12-16 21:31:46.571724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:57.094 [2024-12-16 21:31:46.571730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-12-16 21:31:46.572933] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.423 ms, result 0 00:25:58.036  [2024-12-16T21:31:48.677Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-16T21:31:49.611Z] Copying: 34/1024 [MB] (13 MBps) [2024-12-16T21:31:50.985Z] Copying: 51/1024 [MB] (17 MBps) [2024-12-16T21:31:51.920Z] Copying: 64/1024 [MB] (13 MBps) [2024-12-16T21:31:52.855Z] Copying: 77/1024 [MB] (13 MBps) [2024-12-16T21:31:53.790Z] Copying: 90/1024 [MB] (12 MBps) [2024-12-16T21:31:54.726Z] Copying: 104/1024 [MB] (14 MBps) [2024-12-16T21:31:55.665Z] Copying: 119/1024 [MB] (14 MBps) [2024-12-16T21:31:56.600Z] Copying: 134/1024 [MB] (14 MBps) [2024-12-16T21:31:57.973Z] Copying: 148/1024 [MB] (14 MBps) [2024-12-16T21:31:58.907Z] Copying: 163/1024 [MB] (14 MBps) [2024-12-16T21:31:59.842Z] Copying: 182/1024 [MB] (19 MBps) [2024-12-16T21:32:00.781Z] Copying: 197/1024 [MB] (14 MBps) [2024-12-16T21:32:01.730Z] Copying: 209/1024 [MB] (12 MBps) [2024-12-16T21:32:02.676Z] Copying: 223/1024 [MB] (13 MBps) [2024-12-16T21:32:03.608Z] Copying: 240/1024 [MB] (17 MBps) [2024-12-16T21:32:04.979Z] Copying: 255/1024 [MB] (14 MBps) [2024-12-16T21:32:05.918Z] Copying: 276/1024 [MB] (21 MBps) [2024-12-16T21:32:06.862Z] Copying: 291/1024 [MB] (14 MBps) [2024-12-16T21:32:07.797Z] Copying: 302/1024 [MB] (11 MBps) [2024-12-16T21:32:08.734Z] Copying: 319/1024 [MB] (17 MBps) [2024-12-16T21:32:09.668Z] Copying: 332/1024 [MB] (12 MBps) [2024-12-16T21:32:10.602Z] Copying: 346/1024 [MB] (14 MBps) [2024-12-16T21:32:11.983Z] Copying: 361/1024 [MB] (14 MBps) [2024-12-16T21:32:12.924Z] Copying: 377/1024 [MB] (16 MBps) [2024-12-16T21:32:13.859Z] Copying: 394/1024 [MB] (17 MBps) [2024-12-16T21:32:14.801Z] Copying: 408/1024 [MB] (14 MBps) [2024-12-16T21:32:15.742Z] 
Copying: 419/1024 [MB] (10 MBps) [2024-12-16T21:32:16.683Z] Copying: 430/1024 [MB] (10 MBps) [2024-12-16T21:32:17.617Z] Copying: 440/1024 [MB] (10 MBps) [2024-12-16T21:32:18.990Z] Copying: 453/1024 [MB] (12 MBps) [2024-12-16T21:32:19.924Z] Copying: 466/1024 [MB] (12 MBps) [2024-12-16T21:32:20.862Z] Copying: 479/1024 [MB] (12 MBps) [2024-12-16T21:32:21.796Z] Copying: 489/1024 [MB] (10 MBps) [2024-12-16T21:32:22.736Z] Copying: 504/1024 [MB] (14 MBps) [2024-12-16T21:32:23.679Z] Copying: 516/1024 [MB] (12 MBps) [2024-12-16T21:32:24.614Z] Copying: 528/1024 [MB] (11 MBps) [2024-12-16T21:32:25.991Z] Copying: 541/1024 [MB] (12 MBps) [2024-12-16T21:32:26.925Z] Copying: 554/1024 [MB] (13 MBps) [2024-12-16T21:32:27.859Z] Copying: 567/1024 [MB] (12 MBps) [2024-12-16T21:32:28.794Z] Copying: 581/1024 [MB] (14 MBps) [2024-12-16T21:32:29.728Z] Copying: 595/1024 [MB] (14 MBps) [2024-12-16T21:32:30.766Z] Copying: 609/1024 [MB] (13 MBps) [2024-12-16T21:32:31.708Z] Copying: 623/1024 [MB] (13 MBps) [2024-12-16T21:32:32.647Z] Copying: 648456/1048576 [kB] (10108 kBps) [2024-12-16T21:32:33.587Z] Copying: 647/1024 [MB] (13 MBps) [2024-12-16T21:32:34.971Z] Copying: 659/1024 [MB] (12 MBps) [2024-12-16T21:32:35.903Z] Copying: 669/1024 [MB] (10 MBps) [2024-12-16T21:32:36.836Z] Copying: 683/1024 [MB] (14 MBps) [2024-12-16T21:32:37.770Z] Copying: 698/1024 [MB] (14 MBps) [2024-12-16T21:32:38.706Z] Copying: 712/1024 [MB] (14 MBps) [2024-12-16T21:32:39.641Z] Copying: 726/1024 [MB] (14 MBps) [2024-12-16T21:32:41.022Z] Copying: 741/1024 [MB] (14 MBps) [2024-12-16T21:32:41.594Z] Copying: 758/1024 [MB] (16 MBps) [2024-12-16T21:32:42.972Z] Copying: 770/1024 [MB] (11 MBps) [2024-12-16T21:32:43.911Z] Copying: 780/1024 [MB] (10 MBps) [2024-12-16T21:32:44.848Z] Copying: 793/1024 [MB] (13 MBps) [2024-12-16T21:32:45.788Z] Copying: 807/1024 [MB] (14 MBps) [2024-12-16T21:32:46.722Z] Copying: 821/1024 [MB] (13 MBps) [2024-12-16T21:32:47.659Z] Copying: 836/1024 [MB] (14 MBps) [2024-12-16T21:32:48.596Z] Copying: 860/1024 [MB] (23 MBps) [2024-12-16T21:32:49.976Z] Copying: 872/1024 [MB] (12 MBps) [2024-12-16T21:32:50.917Z] Copying: 884/1024 [MB] (12 MBps) [2024-12-16T21:32:51.854Z] Copying: 896/1024 [MB] (11 MBps) [2024-12-16T21:32:52.789Z] Copying: 907/1024 [MB] (11 MBps) [2024-12-16T21:32:53.725Z] Copying: 921/1024 [MB] (13 MBps) [2024-12-16T21:32:54.662Z] Copying: 935/1024 [MB] (13 MBps) [2024-12-16T21:32:55.600Z] Copying: 948/1024 [MB] (12 MBps) [2024-12-16T21:32:56.975Z] Copying: 960/1024 [MB] (12 MBps) [2024-12-16T21:32:57.909Z] Copying: 974/1024 [MB] (14 MBps) [2024-12-16T21:32:58.846Z] Copying: 988/1024 [MB] (13 MBps) [2024-12-16T21:32:59.850Z] Copying: 1002/1024 [MB] (13 MBps) [2024-12-16T21:33:00.784Z] Copying: 1015/1024 [MB] (13 MBps) [2024-12-16T21:33:00.784Z] Copying: 1048372/1048576 [kB] (8356 kBps) [2024-12-16T21:33:00.784Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-16 21:33:00.695447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.085 [2024-12-16 21:33:00.695490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:11.085 [2024-12-16 21:33:00.695501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:11.085 [2024-12-16 21:33:00.695508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.085 [2024-12-16 21:33:00.695527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:11.085 [2024-12-16 21:33:00.699179] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:11.085 [2024-12-16 21:33:00.699216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:11.085 [2024-12-16 21:33:00.699224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.640 ms 00:27:11.085 [2024-12-16 21:33:00.699230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.085 [2024-12-16 21:33:00.706516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.085 [2024-12-16 21:33:00.706549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:11.085 [2024-12-16 21:33:00.706557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.240 ms 00:27:11.085 [2024-12-16 21:33:00.706563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.085 [2024-12-16 21:33:00.724439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.085 [2024-12-16 21:33:00.724474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:11.085 [2024-12-16 21:33:00.724482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.863 ms 00:27:11.085 [2024-12-16 21:33:00.724488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.085 [2024-12-16 21:33:00.729031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.085 [2024-12-16 21:33:00.729055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:11.085 [2024-12-16 21:33:00.729063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.525 ms 00:27:11.085 [2024-12-16 21:33:00.729069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.085 [2024-12-16 21:33:00.731270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.085 [2024-12-16 21:33:00.731297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:11.085 [2024-12-16 21:33:00.731305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.980 ms 00:27:11.085 [2024-12-16 21:33:00.731311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.085 [2024-12-16 21:33:00.735226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.085 [2024-12-16 21:33:00.735253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:11.085 [2024-12-16 21:33:00.735261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms 00:27:11.085 [2024-12-16 21:33:00.735267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.347 [2024-12-16 21:33:00.933277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.347 [2024-12-16 21:33:00.933314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:11.347 [2024-12-16 21:33:00.933322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 197.983 ms 00:27:11.347 [2024-12-16 21:33:00.933328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.347 [2024-12-16 21:33:00.935817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.347 [2024-12-16 21:33:00.935843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:11.347 [2024-12-16 21:33:00.935851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:27:11.347 [2024-12-16 21:33:00.935857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.347 
[2024-12-16 21:33:00.938192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.347 [2024-12-16 21:33:00.938217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:11.347 [2024-12-16 21:33:00.938224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.311 ms 00:27:11.347 [2024-12-16 21:33:00.938229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.347 [2024-12-16 21:33:00.939663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.347 [2024-12-16 21:33:00.939687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:11.347 [2024-12-16 21:33:00.939693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:27:11.347 [2024-12-16 21:33:00.939698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.347 [2024-12-16 21:33:00.941259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.347 [2024-12-16 21:33:00.941285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:11.347 [2024-12-16 21:33:00.941291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms 00:27:11.347 [2024-12-16 21:33:00.941296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.347 [2024-12-16 21:33:00.941318] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:11.347 [2024-12-16 21:33:00.941332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 81920 / 261120 wr_cnt: 1 state: open 00:27:11.347 [2024-12-16 21:33:00.941340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941562] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:11.347 [2024-12-16 21:33:00.941578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941712] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 
21:33:00.941860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:11.348 [2024-12-16 21:33:00.941928] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:11.348 [2024-12-16 21:33:00.941936] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04a3e9c6-0984-41f3-a8a4-585bb9630ddb 00:27:11.348 [2024-12-16 21:33:00.941943] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 81920 00:27:11.348 [2024-12-16 21:33:00.941948] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 82880 00:27:11.348 [2024-12-16 21:33:00.941953] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 81920 00:27:11.348 [2024-12-16 21:33:00.941959] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0117 00:27:11.348 [2024-12-16 21:33:00.941965] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:11.348 [2024-12-16 21:33:00.941970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:11.348 [2024-12-16 21:33:00.941975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:11.348 [2024-12-16 21:33:00.941980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:11.348 [2024-12-16 21:33:00.941984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:11.348 [2024-12-16 21:33:00.941990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.348 [2024-12-16 21:33:00.941995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:11.348 [2024-12-16 21:33:00.942003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:27:11.348 [2024-12-16 21:33:00.942012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.348 [2024-12-16 21:33:00.943222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.348 [2024-12-16 21:33:00.943242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:11.348 [2024-12-16 21:33:00.943250] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:27:11.348 [2024-12-16 21:33:00.943256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.348 [2024-12-16 21:33:00.943326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.348 [2024-12-16 21:33:00.943335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:11.348 [2024-12-16 21:33:00.943342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:11.348 [2024-12-16 21:33:00.943348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.348 [2024-12-16 21:33:00.947472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.348 [2024-12-16 21:33:00.947496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:11.348 [2024-12-16 21:33:00.947504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.348 [2024-12-16 21:33:00.947510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.348 [2024-12-16 21:33:00.947549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.348 [2024-12-16 21:33:00.947557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:11.348 [2024-12-16 21:33:00.947562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.348 [2024-12-16 21:33:00.947568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.348 [2024-12-16 21:33:00.947594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.348 [2024-12-16 21:33:00.947601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:11.348 [2024-12-16 21:33:00.947608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.348 [2024-12-16 21:33:00.947613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.348 [2024-12-16 21:33:00.947642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.348 [2024-12-16 21:33:00.947651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:11.348 [2024-12-16 21:33:00.947657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.348 [2024-12-16 21:33:00.947663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.348 [2024-12-16 21:33:00.955208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.348 [2024-12-16 21:33:00.955239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:11.348 [2024-12-16 21:33:00.955247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.348 [2024-12-16 21:33:00.955253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.348 [2024-12-16 21:33:00.961410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.349 [2024-12-16 21:33:00.961442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:11.349 [2024-12-16 21:33:00.961450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.349 [2024-12-16 21:33:00.961456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.349 [2024-12-16 21:33:00.961489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.349 [2024-12-16 21:33:00.961496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:27:11.349 [2024-12-16 21:33:00.961502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.349 [2024-12-16 21:33:00.961508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.349 [2024-12-16 21:33:00.961526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.349 [2024-12-16 21:33:00.961533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:11.349 [2024-12-16 21:33:00.961543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.349 [2024-12-16 21:33:00.961548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.349 [2024-12-16 21:33:00.961597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.349 [2024-12-16 21:33:00.961605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:11.349 [2024-12-16 21:33:00.961612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.349 [2024-12-16 21:33:00.961618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.349 [2024-12-16 21:33:00.961654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.349 [2024-12-16 21:33:00.961665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:11.349 [2024-12-16 21:33:00.961671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.349 [2024-12-16 21:33:00.961678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.349 [2024-12-16 21:33:00.961708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.349 [2024-12-16 21:33:00.961716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:11.349 [2024-12-16 21:33:00.961722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.349 [2024-12-16 21:33:00.961733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.349 [2024-12-16 21:33:00.961769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.349 [2024-12-16 21:33:00.961780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:11.349 [2024-12-16 21:33:00.961788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.349 [2024-12-16 21:33:00.961794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.349 [2024-12-16 21:33:00.961882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.412 ms, result 0 00:27:11.918 00:27:11.918 00:27:11.918 21:33:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:13.833 21:33:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:14.092 [2024-12-16 21:33:03.579217] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:27:14.092 [2024-12-16 21:33:03.579350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94650 ] 00:27:14.092 [2024-12-16 21:33:03.721144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.092 [2024-12-16 21:33:03.739313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.351 [2024-12-16 21:33:03.822729] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.351 [2024-12-16 21:33:03.822784] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.351 [2024-12-16 21:33:03.972267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.351 [2024-12-16 21:33:03.972305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:14.351 [2024-12-16 21:33:03.972315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:14.351 [2024-12-16 21:33:03.972322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.351 [2024-12-16 21:33:03.972365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.351 [2024-12-16 21:33:03.972372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:14.351 [2024-12-16 21:33:03.972379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:14.351 [2024-12-16 21:33:03.972388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.351 [2024-12-16 21:33:03.972404] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:14.351 [2024-12-16 21:33:03.972617] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:14.351 [2024-12-16 21:33:03.972640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.351 [2024-12-16 21:33:03.972647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:14.351 [2024-12-16 21:33:03.972655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:27:14.351 [2024-12-16 21:33:03.972661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.351 [2024-12-16 21:33:03.973588] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:14.351 [2024-12-16 21:33:03.975621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.351 [2024-12-16 21:33:03.975661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:14.351 [2024-12-16 21:33:03.975668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:27:14.351 [2024-12-16 21:33:03.975680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.351 [2024-12-16 21:33:03.975723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.351 [2024-12-16 21:33:03.975730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:14.351 [2024-12-16 21:33:03.975737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:14.351 [2024-12-16 21:33:03.975746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.351 [2024-12-16 21:33:03.980093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:14.351 [2024-12-16 21:33:03.980121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:14.351 [2024-12-16 21:33:03.980131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.316 ms 00:27:14.352 [2024-12-16 21:33:03.980136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.352 [2024-12-16 21:33:03.980201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.352 [2024-12-16 21:33:03.980208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:14.352 [2024-12-16 21:33:03.980214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:14.352 [2024-12-16 21:33:03.980220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.352 [2024-12-16 21:33:03.980255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.352 [2024-12-16 21:33:03.980263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:14.352 [2024-12-16 21:33:03.980269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:14.352 [2024-12-16 21:33:03.980278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.352 [2024-12-16 21:33:03.980293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:14.352 [2024-12-16 21:33:03.981455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.352 [2024-12-16 21:33:03.981475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:14.352 [2024-12-16 21:33:03.981485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:27:14.352 [2024-12-16 21:33:03.981493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.352 [2024-12-16 21:33:03.981520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.352 [2024-12-16 21:33:03.981526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:14.352 [2024-12-16 21:33:03.981533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:14.352 [2024-12-16 21:33:03.981543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.352 [2024-12-16 21:33:03.981557] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:14.352 [2024-12-16 21:33:03.981573] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:14.352 [2024-12-16 21:33:03.981605] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:14.352 [2024-12-16 21:33:03.981617] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:14.352 [2024-12-16 21:33:03.981707] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:14.352 [2024-12-16 21:33:03.981717] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:14.352 [2024-12-16 21:33:03.981727] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:14.352 [2024-12-16 21:33:03.981735] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:14.352 [2024-12-16 21:33:03.981742] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:14.352 [2024-12-16 21:33:03.981748] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:14.352 [2024-12-16 21:33:03.981756] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:14.352 [2024-12-16 21:33:03.981762] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:14.352 [2024-12-16 21:33:03.981767] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:14.352 [2024-12-16 21:33:03.981773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.352 [2024-12-16 21:33:03.981779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:14.352 [2024-12-16 21:33:03.981788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:27:14.352 [2024-12-16 21:33:03.981793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.352 [2024-12-16 21:33:03.981861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.352 [2024-12-16 21:33:03.981868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:14.352 [2024-12-16 21:33:03.981874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:14.352 [2024-12-16 21:33:03.981879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.352 [2024-12-16 21:33:03.981950] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:14.352 [2024-12-16 21:33:03.981965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:14.352 [2024-12-16 21:33:03.981971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.352 [2024-12-16 21:33:03.981977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.352 [2024-12-16 21:33:03.981985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:14.352 [2024-12-16 21:33:03.981991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:14.352 [2024-12-16 21:33:03.981997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:14.352 [2024-12-16 21:33:03.982002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:14.352 [2024-12-16 21:33:03.982008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.352 [2024-12-16 21:33:03.982018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:14.352 [2024-12-16 21:33:03.982023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:14.352 [2024-12-16 21:33:03.982028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.352 [2024-12-16 21:33:03.982034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:14.352 [2024-12-16 21:33:03.982040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:14.352 [2024-12-16 21:33:03.982045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:14.352 [2024-12-16 21:33:03.982056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:14.352 [2024-12-16 21:33:03.982061] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:14.352 [2024-12-16 21:33:03.982072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.352 [2024-12-16 21:33:03.982083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:14.352 [2024-12-16 21:33:03.982088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.352 [2024-12-16 21:33:03.982097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:14.352 [2024-12-16 21:33:03.982103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.352 [2024-12-16 21:33:03.982114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:14.352 [2024-12-16 21:33:03.982119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.352 [2024-12-16 21:33:03.982131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:14.352 [2024-12-16 21:33:03.982137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.352 [2024-12-16 21:33:03.982148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:14.352 [2024-12-16 21:33:03.982153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:14.352 [2024-12-16 21:33:03.982162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.352 [2024-12-16 21:33:03.982168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:14.352 [2024-12-16 21:33:03.982175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:14.352 [2024-12-16 21:33:03.982181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:14.352 [2024-12-16 21:33:03.982192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:14.352 [2024-12-16 21:33:03.982198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982203] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:14.352 [2024-12-16 21:33:03.982212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:14.352 [2024-12-16 21:33:03.982219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.352 [2024-12-16 21:33:03.982225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.352 [2024-12-16 21:33:03.982231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:14.352 [2024-12-16 21:33:03.982237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:14.352 [2024-12-16 21:33:03.982243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:14.352 
[2024-12-16 21:33:03.982249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:14.352 [2024-12-16 21:33:03.982254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:14.352 [2024-12-16 21:33:03.982262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:14.352 [2024-12-16 21:33:03.982270] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:14.352 [2024-12-16 21:33:03.982278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.352 [2024-12-16 21:33:03.982288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:14.352 [2024-12-16 21:33:03.982294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:14.352 [2024-12-16 21:33:03.982300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:14.352 [2024-12-16 21:33:03.982306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:14.352 [2024-12-16 21:33:03.982312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:14.352 [2024-12-16 21:33:03.982318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:14.352 [2024-12-16 21:33:03.982324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:14.352 [2024-12-16 21:33:03.982330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:14.352 [2024-12-16 21:33:03.982337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:14.353 [2024-12-16 21:33:03.982346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:14.353 [2024-12-16 21:33:03.982353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:14.353 [2024-12-16 21:33:03.982359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:14.353 [2024-12-16 21:33:03.982365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:14.353 [2024-12-16 21:33:03.982372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:14.353 [2024-12-16 21:33:03.982380] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:14.353 [2024-12-16 21:33:03.982387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.353 [2024-12-16 21:33:03.982394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:14.353 [2024-12-16 21:33:03.982401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:14.353 [2024-12-16 21:33:03.982407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:14.353 [2024-12-16 21:33:03.982413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:14.353 [2024-12-16 21:33:03.982419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:03.982426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:14.353 [2024-12-16 21:33:03.982432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:27:14.353 [2024-12-16 21:33:03.982440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:03.990323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:03.990351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:14.353 [2024-12-16 21:33:03.990362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.844 ms 00:27:14.353 [2024-12-16 21:33:03.990371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:03.990433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:03.990438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:14.353 [2024-12-16 21:33:03.990444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:14.353 [2024-12-16 21:33:03.990449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.007200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.007240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:14.353 [2024-12-16 21:33:04.007252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.715 ms 00:27:14.353 [2024-12-16 21:33:04.007260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.007289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.007302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:14.353 [2024-12-16 21:33:04.007310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:14.353 [2024-12-16 21:33:04.007317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.007684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.007708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:14.353 [2024-12-16 21:33:04.007718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:27:14.353 [2024-12-16 21:33:04.007725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.007846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.007862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:14.353 [2024-12-16 21:33:04.007871] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:27:14.353 [2024-12-16 21:33:04.007880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.013039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.013076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:14.353 [2024-12-16 21:33:04.013087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.134 ms 00:27:14.353 [2024-12-16 21:33:04.013099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.015846] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:14.353 [2024-12-16 21:33:04.015884] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:14.353 [2024-12-16 21:33:04.015904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.015913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:14.353 [2024-12-16 21:33:04.015922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:27:14.353 [2024-12-16 21:33:04.015930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.029059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.029095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:14.353 [2024-12-16 21:33:04.029104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.090 ms 00:27:14.353 [2024-12-16 21:33:04.029110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.030750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.030779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:14.353 [2024-12-16 21:33:04.030786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:27:14.353 [2024-12-16 21:33:04.030796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.032329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.032356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:14.353 [2024-12-16 21:33:04.032363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.506 ms 00:27:14.353 [2024-12-16 21:33:04.032369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.032605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.032621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:14.353 [2024-12-16 21:33:04.032640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:27:14.353 [2024-12-16 21:33:04.032646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.353 [2024-12-16 21:33:04.048073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.353 [2024-12-16 21:33:04.048106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:14.353 [2024-12-16 21:33:04.048115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.409 ms 00:27:14.353 [2024-12-16 21:33:04.048125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.611 [2024-12-16 21:33:04.053828] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:14.612 [2024-12-16 21:33:04.055936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.612 [2024-12-16 21:33:04.055966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:14.612 [2024-12-16 21:33:04.055974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.780 ms 00:27:14.612 [2024-12-16 21:33:04.055980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.612 [2024-12-16 21:33:04.056022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.612 [2024-12-16 21:33:04.056030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:14.612 [2024-12-16 21:33:04.056042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:14.612 [2024-12-16 21:33:04.056052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.612 [2024-12-16 21:33:04.057031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.612 [2024-12-16 21:33:04.057059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:14.612 [2024-12-16 21:33:04.057068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:27:14.612 [2024-12-16 21:33:04.057076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.612 [2024-12-16 21:33:04.057093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.612 [2024-12-16 21:33:04.057100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:14.612 [2024-12-16 21:33:04.057105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:14.612 [2024-12-16 21:33:04.057111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.612 [2024-12-16 21:33:04.057137] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:14.612 [2024-12-16 21:33:04.057144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.612 [2024-12-16 21:33:04.057164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:14.612 [2024-12-16 21:33:04.057174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:14.612 [2024-12-16 21:33:04.057181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.612 [2024-12-16 21:33:04.060501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.612 [2024-12-16 21:33:04.060534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:14.612 [2024-12-16 21:33:04.060542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.305 ms 00:27:14.612 [2024-12-16 21:33:04.060552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.612 [2024-12-16 21:33:04.060611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.612 [2024-12-16 21:33:04.060619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:14.612 [2024-12-16 21:33:04.060636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:14.612 [2024-12-16 21:33:04.060642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.612 
[2024-12-16 21:33:04.061397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.797 ms, result 0 00:27:15.557
[2024-12-16T21:33:06.640Z] Copying: 1360/1048576 [kB] (1360 kBps)
[2024-12-16T21:33:07.211Z] Copying: 4296/1048576 [kB] (2936 kBps)
[2024-12-16T21:33:08.584Z] Copying: 13948/1048576 [kB] (9652 kBps)
[2024-12-16T21:33:09.520Z] Copying: 33/1024 [MB] (19 MBps)
[2024-12-16T21:33:10.460Z] Copying: 59/1024 [MB] (25 MBps)
[2024-12-16T21:33:11.393Z] Copying: 74/1024 [MB] (15 MBps)
[2024-12-16T21:33:12.332Z] Copying: 92/1024 [MB] (17 MBps)
[2024-12-16T21:33:13.267Z] Copying: 108/1024 [MB] (16 MBps)
[2024-12-16T21:33:14.208Z] Copying: 126/1024 [MB] (18 MBps)
[2024-12-16T21:33:15.591Z] Copying: 143/1024 [MB] (16 MBps)
[2024-12-16T21:33:16.532Z] Copying: 163/1024 [MB] (19 MBps)
[2024-12-16T21:33:17.472Z] Copying: 181/1024 [MB] (18 MBps)
[2024-12-16T21:33:18.412Z] Copying: 197/1024 [MB] (16 MBps)
[2024-12-16T21:33:19.345Z] Copying: 213/1024 [MB] (16 MBps)
[2024-12-16T21:33:20.288Z] Copying: 231/1024 [MB] (17 MBps)
[2024-12-16T21:33:21.228Z] Copying: 249/1024 [MB] (18 MBps)
[2024-12-16T21:33:22.604Z] Copying: 266/1024 [MB] (16 MBps)
[2024-12-16T21:33:23.601Z] Copying: 289/1024 [MB] (22 MBps)
[2024-12-16T21:33:24.556Z] Copying: 308/1024 [MB] (19 MBps)
[2024-12-16T21:33:25.498Z] Copying: 329/1024 [MB] (20 MBps)
[2024-12-16T21:33:26.441Z] Copying: 344/1024 [MB] (15 MBps)
[2024-12-16T21:33:27.386Z] Copying: 367/1024 [MB] (22 MBps)
[2024-12-16T21:33:28.324Z] Copying: 382/1024 [MB] (15 MBps)
[2024-12-16T21:33:29.258Z] Copying: 399/1024 [MB] (16 MBps)
[2024-12-16T21:33:30.632Z] Copying: 430/1024 [MB] (30 MBps)
[2024-12-16T21:33:31.566Z] Copying: 454/1024 [MB] (24 MBps)
[2024-12-16T21:33:32.500Z] Copying: 472/1024 [MB] (18 MBps)
[2024-12-16T21:33:33.434Z] Copying: 491/1024 [MB] (19 MBps)
[2024-12-16T21:33:34.369Z] Copying: 511/1024 [MB] (19 MBps)
[2024-12-16T21:33:35.305Z] Copying: 530/1024 [MB] (19 MBps)
[2024-12-16T21:33:36.242Z] Copying: 556/1024 [MB] (26 MBps)
[2024-12-16T21:33:37.621Z] Copying: 582/1024 [MB] (25 MBps)
[2024-12-16T21:33:38.564Z] Copying: 605/1024 [MB] (23 MBps)
[2024-12-16T21:33:39.507Z] Copying: 621/1024 [MB] (15 MBps)
[2024-12-16T21:33:40.449Z] Copying: 637/1024 [MB] (16 MBps)
[2024-12-16T21:33:41.386Z] Copying: 654/1024 [MB] (17 MBps)
[2024-12-16T21:33:42.322Z] Copying: 670/1024 [MB] (15 MBps)
[2024-12-16T21:33:43.257Z] Copying: 690/1024 [MB] (19 MBps)
[2024-12-16T21:33:44.632Z] Copying: 710/1024 [MB] (20 MBps)
[2024-12-16T21:33:45.201Z] Copying: 730/1024 [MB] (19 MBps)
[2024-12-16T21:33:46.575Z] Copying: 749/1024 [MB] (19 MBps)
[2024-12-16T21:33:47.529Z] Copying: 769/1024 [MB] (19 MBps)
[2024-12-16T21:33:48.464Z] Copying: 787/1024 [MB] (17 MBps)
[2024-12-16T21:33:49.401Z] Copying: 804/1024 [MB] (17 MBps)
[2024-12-16T21:33:50.416Z] Copying: 823/1024 [MB] (19 MBps)
[2024-12-16T21:33:51.355Z] Copying: 839/1024 [MB] (16 MBps)
[2024-12-16T21:33:52.289Z] Copying: 856/1024 [MB] (17 MBps)
[2024-12-16T21:33:53.232Z] Copying: 875/1024 [MB] (18 MBps)
[2024-12-16T21:33:54.612Z] Copying: 895/1024 [MB] (20 MBps)
[2024-12-16T21:33:55.548Z] Copying: 912/1024 [MB] (16 MBps)
[2024-12-16T21:33:56.485Z] Copying: 931/1024 [MB] (18 MBps)
[2024-12-16T21:33:57.422Z] Copying: 948/1024 [MB] (17 MBps)
[2024-12-16T21:33:58.365Z] Copying: 967/1024 [MB] (18 MBps)
[2024-12-16T21:33:59.308Z] Copying: 984/1024 [MB] (16 MBps)
[2024-12-16T21:34:00.242Z] Copying: 999/1024 [MB] (15 MBps)
[2024-12-16T21:34:00.501Z] Copying: 1018/1024 [MB] (18 MBps)
[2024-12-16T21:34:00.762Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-16 21:34:00.673998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.674803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:11.062 [2024-12-16 21:34:00.675139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:11.062 [2024-12-16 21:34:00.675220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.675272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:11.062 [2024-12-16 21:34:00.675885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.675977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:11.062 [2024-12-16 21:34:00.676032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:28:11.062 [2024-12-16 21:34:00.676054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.676282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.676307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:11.062 [2024-12-16 21:34:00.676326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:28:11.062 [2024-12-16 21:34:00.676378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.689605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.689643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:11.062 [2024-12-16 21:34:00.689656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.138 ms 00:28:11.062 [2024-12-16 21:34:00.689663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.694178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.694198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:11.062 [2024-12-16 21:34:00.694205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.490 ms 00:28:11.062 [2024-12-16 21:34:00.694212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.696074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.696097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:11.062 [2024-12-16 21:34:00.696104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.820 ms 00:28:11.062 [2024-12-16 21:34:00.696110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.699791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.699819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:11.062 [2024-12-16 21:34:00.699826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.657 ms 00:28:11.062 [2024-12-16 21:34:00.699833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.703571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.703591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:28:11.062 [2024-12-16 21:34:00.703599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.711 ms 00:28:11.062 [2024-12-16 21:34:00.703606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.706060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.706082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:11.062 [2024-12-16 21:34:00.706089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.443 ms 00:28:11.062 [2024-12-16 21:34:00.706095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.708459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.708479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:11.062 [2024-12-16 21:34:00.708486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:28:11.062 [2024-12-16 21:34:00.708491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.710129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.710150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:11.062 [2024-12-16 21:34:00.710156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:28:11.062 [2024-12-16 21:34:00.710162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.711647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.062 [2024-12-16 21:34:00.711667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:11.062 [2024-12-16 21:34:00.711673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 00:28:11.062 [2024-12-16 21:34:00.711678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.062 [2024-12-16 21:34:00.711699] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:11.063 [2024-12-16 21:34:00.711710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:11.063 [2024-12-16 21:34:00.711718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:11.063 [2024-12-16 21:34:00.711724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711766] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711913] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.711998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 
21:34:00.712061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:28:11.063 [2024-12-16 21:34:00.712201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:11.063 [2024-12-16 21:34:00.712239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:11.064 [2024-12-16 21:34:00.712246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:11.064 [2024-12-16 21:34:00.712251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:11.064 [2024-12-16 21:34:00.712257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:11.064 [2024-12-16 21:34:00.712263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:11.064 [2024-12-16 21:34:00.712268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:11.064 [2024-12-16 21:34:00.712274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:11.064 [2024-12-16 21:34:00.712279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:11.064 [2024-12-16 21:34:00.712290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:11.064 [2024-12-16 21:34:00.712301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:11.064 [2024-12-16 21:34:00.712310] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04a3e9c6-0984-41f3-a8a4-585bb9630ddb 00:28:11.064 [2024-12-16 21:34:00.712321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:11.064 [2024-12-16 21:34:00.712327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 182720 00:28:11.064 [2024-12-16 21:34:00.712332] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 180736 00:28:11.064 [2024-12-16 21:34:00.712342] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0110 00:28:11.064 [2024-12-16 21:34:00.712353] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:11.064 [2024-12-16 21:34:00.712359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:11.064 [2024-12-16 21:34:00.712364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:11.064 [2024-12-16 21:34:00.712369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:11.064 [2024-12-16 21:34:00.712374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:28:11.064 [2024-12-16 21:34:00.712379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.064 [2024-12-16 21:34:00.712388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:11.064 [2024-12-16 21:34:00.712397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:28:11.064 [2024-12-16 21:34:00.712402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.713660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.064 [2024-12-16 21:34:00.713679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:11.064 [2024-12-16 21:34:00.713686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:28:11.064 [2024-12-16 21:34:00.713692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.713760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.064 [2024-12-16 21:34:00.713767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:11.064 [2024-12-16 21:34:00.713777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:11.064 [2024-12-16 21:34:00.713783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.717850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.717871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:11.064 [2024-12-16 21:34:00.717882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.717890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.717926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.717932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:11.064 [2024-12-16 21:34:00.717941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.717946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.717987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.717994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:11.064 [2024-12-16 21:34:00.718001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.718007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.718018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.718025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:11.064 [2024-12-16 21:34:00.718031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.718045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.725533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.725556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:11.064 [2024-12-16 21:34:00.725564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 
[2024-12-16 21:34:00.725570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.732175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.732203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:11.064 [2024-12-16 21:34:00.732214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.732224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.732243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.732250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:11.064 [2024-12-16 21:34:00.732256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.732262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.732293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.732299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:11.064 [2024-12-16 21:34:00.732305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.732311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.732360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.732368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:11.064 [2024-12-16 21:34:00.732374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.732380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.732399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.732406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:11.064 [2024-12-16 21:34:00.732412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.732417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.732446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.732455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:11.064 [2024-12-16 21:34:00.732461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.732467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.732497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.064 [2024-12-16 21:34:00.732504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:11.064 [2024-12-16 21:34:00.732510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.064 [2024-12-16 21:34:00.732518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.064 [2024-12-16 21:34:00.732614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.612 ms, result 0 00:28:11.325 00:28:11.325 00:28:11.325 21:34:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:13.869 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:13.869 21:34:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:13.869 [2024-12-16 21:34:03.124459] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:13.869 [2024-12-16 21:34:03.124565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95257 ] 00:28:13.869 [2024-12-16 21:34:03.269456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.869 [2024-12-16 21:34:03.296812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.869 [2024-12-16 21:34:03.406458] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:13.869 [2024-12-16 21:34:03.406539] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:13.869 [2024-12-16 21:34:03.568857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.869 [2024-12-16 21:34:03.568922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:13.869 [2024-12-16 21:34:03.568937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:13.869 [2024-12-16 21:34:03.568950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.869 [2024-12-16 21:34:03.569024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.869 [2024-12-16 21:34:03.569036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:13.869 [2024-12-16 21:34:03.569045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:13.869 [2024-12-16 21:34:03.569064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.869 [2024-12-16 21:34:03.569097] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:13.869 [2024-12-16 21:34:03.569514] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:13.869 [2024-12-16 21:34:03.569570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.869 [2024-12-16 21:34:03.569579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:13.869 [2024-12-16 21:34:03.569592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:28:13.869 [2024-12-16 21:34:03.569600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.132 [2024-12-16 21:34:03.571530] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:14.132 [2024-12-16 21:34:03.575612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.132 [2024-12-16 21:34:03.575688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:14.132 [2024-12-16 21:34:03.575699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:28:14.132 [2024-12-16 21:34:03.575715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.132 [2024-12-16 21:34:03.575801] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.132 [2024-12-16 21:34:03.575812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:14.132 [2024-12-16 21:34:03.575822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:14.132 [2024-12-16 21:34:03.575829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.132 [2024-12-16 21:34:03.584310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.132 [2024-12-16 21:34:03.584360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:14.132 [2024-12-16 21:34:03.584375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.436 ms 00:28:14.132 [2024-12-16 21:34:03.584383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.132 [2024-12-16 21:34:03.584488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.132 [2024-12-16 21:34:03.584499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:14.132 [2024-12-16 21:34:03.584510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:14.132 [2024-12-16 21:34:03.584518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.132 [2024-12-16 21:34:03.584578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.132 [2024-12-16 21:34:03.584589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:14.132 [2024-12-16 21:34:03.584603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:14.132 [2024-12-16 21:34:03.584618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.132 [2024-12-16 21:34:03.584675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:14.132 [2024-12-16 21:34:03.586728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.132 [2024-12-16 21:34:03.586769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:14.132 [2024-12-16 21:34:03.586779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:28:14.132 [2024-12-16 21:34:03.586786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.132 [2024-12-16 21:34:03.586831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.132 [2024-12-16 21:34:03.586844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:14.132 [2024-12-16 21:34:03.586856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:14.132 [2024-12-16 21:34:03.586870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.132 [2024-12-16 21:34:03.586889] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:14.132 [2024-12-16 21:34:03.586912] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:14.132 [2024-12-16 21:34:03.586954] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:14.132 [2024-12-16 21:34:03.586971] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:14.132 [2024-12-16 21:34:03.587077] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 
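For scale on the spdk_dd invocation above: --skip and --count appear to be in bdev blocks, and with the FTL device's 4 KiB block size (implied by the 1024 MB total in the copy progress further down) --skip=262144 --count=262144 reads exactly the second gigabyte of ftl0 into testfile2. A back-of-the-envelope check, assuming the 4096-byte block size:

  # 262144 blocks * 4096 bytes/block = 1 GiB, starting 1 GiB into the device
  echo $(( 262144 * 4096 ))             # 1073741824 bytes
  echo $(( 262144 * 4096 / 1048576 ))   # 1024 MiB -- matches "Copying: 1024/1024 [MB]" below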
00:28:14.132 [2024-12-16 21:34:03.587096] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:14.132 [2024-12-16 21:34:03.587107] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:14.133 [2024-12-16 21:34:03.587118] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587127] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587136] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:14.133 [2024-12-16 21:34:03.587144] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:14.133 [2024-12-16 21:34:03.587151] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:14.133 [2024-12-16 21:34:03.587159] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:14.133 [2024-12-16 21:34:03.587168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.133 [2024-12-16 21:34:03.587176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:14.133 [2024-12-16 21:34:03.587190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:28:14.133 [2024-12-16 21:34:03.587200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.133 [2024-12-16 21:34:03.587288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.133 [2024-12-16 21:34:03.587297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:14.133 [2024-12-16 21:34:03.587304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:14.133 [2024-12-16 21:34:03.587311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.133 [2024-12-16 21:34:03.587407] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:14.133 [2024-12-16 21:34:03.587421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:14.133 [2024-12-16 21:34:03.587435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:14.133 [2024-12-16 21:34:03.587469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:14.133 [2024-12-16 21:34:03.587496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:14.133 [2024-12-16 21:34:03.587515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:14.133 [2024-12-16 21:34:03.587523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:14.133 [2024-12-16 21:34:03.587532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:14.133 [2024-12-16 21:34:03.587541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:14.133 
[2024-12-16 21:34:03.587551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:14.133 [2024-12-16 21:34:03.587563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:14.133 [2024-12-16 21:34:03.587581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:14.133 [2024-12-16 21:34:03.587607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:14.133 [2024-12-16 21:34:03.587646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:14.133 [2024-12-16 21:34:03.587675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:14.133 [2024-12-16 21:34:03.587697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:14.133 [2024-12-16 21:34:03.587720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:14.133 [2024-12-16 21:34:03.587735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:14.133 [2024-12-16 21:34:03.587743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:14.133 [2024-12-16 21:34:03.587751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:14.133 [2024-12-16 21:34:03.587759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:14.133 [2024-12-16 21:34:03.587768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:14.133 [2024-12-16 21:34:03.587777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:14.133 [2024-12-16 21:34:03.587795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:14.133 [2024-12-16 21:34:03.587804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587812] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:14.133 [2024-12-16 21:34:03.587821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:14.133 [2024-12-16 21:34:03.587830] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:14.133 [2024-12-16 21:34:03.587850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:14.133 [2024-12-16 21:34:03.587858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:14.133 [2024-12-16 21:34:03.587865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:14.133 [2024-12-16 21:34:03.587874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:14.133 [2024-12-16 21:34:03.587881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:14.133 [2024-12-16 21:34:03.587889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:14.133 [2024-12-16 21:34:03.587898] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:14.133 [2024-12-16 21:34:03.587909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:14.133 [2024-12-16 21:34:03.587918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:14.133 [2024-12-16 21:34:03.587927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:14.133 [2024-12-16 21:34:03.587937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:14.133 [2024-12-16 21:34:03.587945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:14.133 [2024-12-16 21:34:03.587953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:14.133 [2024-12-16 21:34:03.587964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:14.133 [2024-12-16 21:34:03.587972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:14.133 [2024-12-16 21:34:03.587980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:14.133 [2024-12-16 21:34:03.587990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:14.133 [2024-12-16 21:34:03.588004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:14.133 [2024-12-16 21:34:03.588013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:14.133 [2024-12-16 21:34:03.588021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:14.133 [2024-12-16 21:34:03.588030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:14.133 [2024-12-16 21:34:03.588037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x7220 blk_sz:0x13c0e0 00:28:14.133 [2024-12-16 21:34:03.588046] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:14.133 [2024-12-16 21:34:03.588054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:14.133 [2024-12-16 21:34:03.588064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:14.133 [2024-12-16 21:34:03.588072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:14.133 [2024-12-16 21:34:03.588082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:14.133 [2024-12-16 21:34:03.588091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:14.133 [2024-12-16 21:34:03.588099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.133 [2024-12-16 21:34:03.588111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:14.133 [2024-12-16 21:34:03.588118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:28:14.133 [2024-12-16 21:34:03.588128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.133 [2024-12-16 21:34:03.602320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.133 [2024-12-16 21:34:03.602361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.133 [2024-12-16 21:34:03.602379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.148 ms 00:28:14.133 [2024-12-16 21:34:03.602388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.133 [2024-12-16 21:34:03.602472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.602482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:14.134 [2024-12-16 21:34:03.602491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:14.134 [2024-12-16 21:34:03.602499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.636432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.636503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.134 [2024-12-16 21:34:03.636518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.870 ms 00:28:14.134 [2024-12-16 21:34:03.636527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.636582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.636593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.134 [2024-12-16 21:34:03.636608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:14.134 [2024-12-16 21:34:03.636621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.637234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.637280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
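The blk_offs/blk_sz values in the superblock metadata dump above are hex counts of 4 KiB FTL blocks, which is why they line up with the MiB figures in the earlier ftl_layout region dump. For example, the type 0x2 entry (evidently the L2P region: offset 0x20 blocks = 0.12 MiB, size 0x5000 blocks = 80 MiB, both matching) works out as follows (illustrative shell arithmetic, not test output):

  # 0x5000 = 20480 blocks; 20480 * 4 KiB = 80 MiB -> "Region l2p ... blocks: 80.00 MiB"
  echo $(( 0x5000 * 4096 / 1048576 ))   # 80
  # band_md / band_md_mirror: 0x80 blocks * 4 KiB = 0.50 MiB each
  echo $(( 0x80 * 4096 ))               # 524288 bytes = 0.50 MiB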
00:28:14.134 [2024-12-16 21:34:03.637294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:28:14.134 [2024-12-16 21:34:03.637304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.637476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.637488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.134 [2024-12-16 21:34:03.637498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:28:14.134 [2024-12-16 21:34:03.637508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.645445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.645492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:14.134 [2024-12-16 21:34:03.645504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.910 ms 00:28:14.134 [2024-12-16 21:34:03.645524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.649392] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:14.134 [2024-12-16 21:34:03.649442] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:14.134 [2024-12-16 21:34:03.649458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.649466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:14.134 [2024-12-16 21:34:03.649476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.810 ms 00:28:14.134 [2024-12-16 21:34:03.649484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.665975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.666025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:14.134 [2024-12-16 21:34:03.666036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.437 ms 00:28:14.134 [2024-12-16 21:34:03.666045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.669438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.669490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:14.134 [2024-12-16 21:34:03.669500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:28:14.134 [2024-12-16 21:34:03.669507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.672598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.672661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:14.134 [2024-12-16 21:34:03.672671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:28:14.134 [2024-12-16 21:34:03.672689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.673028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.673041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:14.134 [2024-12-16 21:34:03.673054] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:28:14.134 [2024-12-16 21:34:03.673065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.700977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.701030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:14.134 [2024-12-16 21:34:03.701041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.895 ms 00:28:14.134 [2024-12-16 21:34:03.701050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.709232] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:14.134 [2024-12-16 21:34:03.712416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.712467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:14.134 [2024-12-16 21:34:03.712480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.313 ms 00:28:14.134 [2024-12-16 21:34:03.712488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.712566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.712577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:14.134 [2024-12-16 21:34:03.712596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:14.134 [2024-12-16 21:34:03.712604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.713387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.713444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:14.134 [2024-12-16 21:34:03.713456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:28:14.134 [2024-12-16 21:34:03.713464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.713500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.713510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:14.134 [2024-12-16 21:34:03.713518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:14.134 [2024-12-16 21:34:03.713526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.713560] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:14.134 [2024-12-16 21:34:03.713572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.713586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:14.134 [2024-12-16 21:34:03.713595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:14.134 [2024-12-16 21:34:03.713602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.719299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.719538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:14.134 [2024-12-16 21:34:03.719558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.678 ms 00:28:14.134 [2024-12-16 21:34:03.719568] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.719665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.134 [2024-12-16 21:34:03.719677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:14.134 [2024-12-16 21:34:03.719695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:14.134 [2024-12-16 21:34:03.719703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.134 [2024-12-16 21:34:03.720849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.515 ms, result 0 00:28:15.523  [2024-12-16T21:34:06.165Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-16T21:34:07.109Z] Copying: 33/1024 [MB] (15 MBps) [2024-12-16T21:34:08.060Z] Copying: 53/1024 [MB] (19 MBps) [2024-12-16T21:34:09.000Z] Copying: 64/1024 [MB] (10 MBps) [2024-12-16T21:34:09.942Z] Copying: 75/1024 [MB] (10 MBps) [2024-12-16T21:34:11.323Z] Copying: 85/1024 [MB] (10 MBps) [2024-12-16T21:34:12.259Z] Copying: 98/1024 [MB] (13 MBps) [2024-12-16T21:34:13.202Z] Copying: 115/1024 [MB] (16 MBps) [2024-12-16T21:34:14.144Z] Copying: 132/1024 [MB] (17 MBps) [2024-12-16T21:34:15.076Z] Copying: 145/1024 [MB] (12 MBps) [2024-12-16T21:34:16.035Z] Copying: 161/1024 [MB] (15 MBps) [2024-12-16T21:34:16.973Z] Copying: 175/1024 [MB] (14 MBps) [2024-12-16T21:34:17.911Z] Copying: 188/1024 [MB] (12 MBps) [2024-12-16T21:34:19.296Z] Copying: 199/1024 [MB] (11 MBps) [2024-12-16T21:34:20.239Z] Copying: 218/1024 [MB] (18 MBps) [2024-12-16T21:34:21.181Z] Copying: 234/1024 [MB] (15 MBps) [2024-12-16T21:34:22.123Z] Copying: 246/1024 [MB] (11 MBps) [2024-12-16T21:34:23.064Z] Copying: 256/1024 [MB] (10 MBps) [2024-12-16T21:34:24.005Z] Copying: 269/1024 [MB] (13 MBps) [2024-12-16T21:34:24.947Z] Copying: 288/1024 [MB] (18 MBps) [2024-12-16T21:34:26.332Z] Copying: 304/1024 [MB] (16 MBps) [2024-12-16T21:34:26.902Z] Copying: 316/1024 [MB] (11 MBps) [2024-12-16T21:34:28.283Z] Copying: 330/1024 [MB] (14 MBps) [2024-12-16T21:34:29.224Z] Copying: 343/1024 [MB] (12 MBps) [2024-12-16T21:34:30.164Z] Copying: 360/1024 [MB] (16 MBps) [2024-12-16T21:34:31.097Z] Copying: 370/1024 [MB] (10 MBps) [2024-12-16T21:34:32.032Z] Copying: 384/1024 [MB] (13 MBps) [2024-12-16T21:34:32.975Z] Copying: 400/1024 [MB] (16 MBps) [2024-12-16T21:34:33.918Z] Copying: 412/1024 [MB] (11 MBps) [2024-12-16T21:34:35.292Z] Copying: 423/1024 [MB] (11 MBps) [2024-12-16T21:34:36.231Z] Copying: 436/1024 [MB] (12 MBps) [2024-12-16T21:34:37.173Z] Copying: 449/1024 [MB] (13 MBps) [2024-12-16T21:34:38.109Z] Copying: 459/1024 [MB] (10 MBps) [2024-12-16T21:34:39.052Z] Copying: 473/1024 [MB] (13 MBps) [2024-12-16T21:34:39.990Z] Copying: 484/1024 [MB] (11 MBps) [2024-12-16T21:34:40.929Z] Copying: 496/1024 [MB] (11 MBps) [2024-12-16T21:34:42.352Z] Copying: 509/1024 [MB] (12 MBps) [2024-12-16T21:34:42.933Z] Copying: 519/1024 [MB] (10 MBps) [2024-12-16T21:34:44.317Z] Copying: 530/1024 [MB] (11 MBps) [2024-12-16T21:34:45.260Z] Copying: 542/1024 [MB] (11 MBps) [2024-12-16T21:34:46.200Z] Copying: 554/1024 [MB] (12 MBps) [2024-12-16T21:34:47.144Z] Copying: 566/1024 [MB] (12 MBps) [2024-12-16T21:34:48.080Z] Copying: 581/1024 [MB] (14 MBps) [2024-12-16T21:34:49.017Z] Copying: 593/1024 [MB] (11 MBps) [2024-12-16T21:34:49.961Z] Copying: 607/1024 [MB] (13 MBps) [2024-12-16T21:34:50.907Z] Copying: 618/1024 [MB] (10 MBps) [2024-12-16T21:34:52.289Z] Copying: 631/1024 [MB] (12 MBps) [2024-12-16T21:34:53.232Z] Copying: 644/1024 [MB] (13 MBps) 
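The progress stamps also give a rough cross-check on throughput: the run starts at 21:34:06 with 17/1024 MB and, continuing below, closes at 21:35:20 with 1024/1024 MB, about 74 seconds end to end. That lands near the "average 13 MBps" printed with the final record:

  # ~1024 MB over ~74 s between the first and last progress stamps
  awk 'BEGIN { printf "%.1f MB/s\n", 1024 / 74 }'   # 13.8 MB/s, roughly the reported average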
[2024-12-16T21:34:54.172Z] Copying: 657/1024 [MB] (12 MBps) [2024-12-16T21:34:55.111Z] Copying: 670/1024 [MB] (13 MBps) [2024-12-16T21:34:56.052Z] Copying: 682/1024 [MB] (11 MBps) [2024-12-16T21:34:57.050Z] Copying: 692/1024 [MB] (10 MBps) [2024-12-16T21:34:57.984Z] Copying: 707/1024 [MB] (14 MBps) [2024-12-16T21:34:58.919Z] Copying: 723/1024 [MB] (16 MBps) [2024-12-16T21:35:00.304Z] Copying: 737/1024 [MB] (13 MBps) [2024-12-16T21:35:01.239Z] Copying: 753/1024 [MB] (16 MBps) [2024-12-16T21:35:02.178Z] Copying: 768/1024 [MB] (14 MBps) [2024-12-16T21:35:03.119Z] Copying: 781/1024 [MB] (13 MBps) [2024-12-16T21:35:04.053Z] Copying: 791/1024 [MB] (10 MBps) [2024-12-16T21:35:04.996Z] Copying: 805/1024 [MB] (13 MBps) [2024-12-16T21:35:05.940Z] Copying: 816/1024 [MB] (11 MBps) [2024-12-16T21:35:07.319Z] Copying: 827/1024 [MB] (11 MBps) [2024-12-16T21:35:07.941Z] Copying: 846/1024 [MB] (19 MBps) [2024-12-16T21:35:09.320Z] Copying: 866/1024 [MB] (19 MBps) [2024-12-16T21:35:10.258Z] Copying: 877/1024 [MB] (11 MBps) [2024-12-16T21:35:11.192Z] Copying: 889/1024 [MB] (11 MBps) [2024-12-16T21:35:12.135Z] Copying: 906/1024 [MB] (17 MBps) [2024-12-16T21:35:13.075Z] Copying: 923/1024 [MB] (17 MBps) [2024-12-16T21:35:14.014Z] Copying: 935/1024 [MB] (11 MBps) [2024-12-16T21:35:14.956Z] Copying: 945/1024 [MB] (10 MBps) [2024-12-16T21:35:15.901Z] Copying: 958/1024 [MB] (12 MBps) [2024-12-16T21:35:17.282Z] Copying: 969/1024 [MB] (11 MBps) [2024-12-16T21:35:18.224Z] Copying: 981/1024 [MB] (11 MBps) [2024-12-16T21:35:19.169Z] Copying: 993/1024 [MB] (11 MBps) [2024-12-16T21:35:20.110Z] Copying: 1012/1024 [MB] (19 MBps) [2024-12-16T21:35:20.110Z] Copying: 1023/1024 [MB] (11 MBps) [2024-12-16T21:35:20.110Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-16 21:35:20.066178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.066237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:30.410 [2024-12-16 21:35:20.066255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:30.410 [2024-12-16 21:35:20.066263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.066288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:30.410 [2024-12-16 21:35:20.066734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.066754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:30.410 [2024-12-16 21:35:20.066763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:29:30.410 [2024-12-16 21:35:20.066771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.066995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.067006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:30.410 [2024-12-16 21:35:20.067019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:29:30.410 [2024-12-16 21:35:20.067028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.070471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.070496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:30.410 [2024-12-16 21:35:20.070505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.428 ms 00:29:30.410 [2024-12-16 21:35:20.070518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.077361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.077395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:30.410 [2024-12-16 21:35:20.077405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.826 ms 00:29:30.410 [2024-12-16 21:35:20.077422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.078984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.079013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:30.410 [2024-12-16 21:35:20.079021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:29:30.410 [2024-12-16 21:35:20.079027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.083035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.083065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:30.410 [2024-12-16 21:35:20.083073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.937 ms 00:29:30.410 [2024-12-16 21:35:20.083079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.087300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.087339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:30.410 [2024-12-16 21:35:20.087348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.135 ms 00:29:30.410 [2024-12-16 21:35:20.087360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.089890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.089919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:30.410 [2024-12-16 21:35:20.089926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.513 ms 00:29:30.410 [2024-12-16 21:35:20.089932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.092028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.092057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:30.410 [2024-12-16 21:35:20.092065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:29:30.410 [2024-12-16 21:35:20.092071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.093866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.093893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:30.410 [2024-12-16 21:35:20.093900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms 00:29:30.410 [2024-12-16 21:35:20.093906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.095615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.410 [2024-12-16 21:35:20.095650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:30.410 [2024-12-16 
21:35:20.095657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.668 ms 00:29:30.410 [2024-12-16 21:35:20.095662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.410 [2024-12-16 21:35:20.095684] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:30.410 [2024-12-16 21:35:20.095695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:30.410 [2024-12-16 21:35:20.095708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:30.410 [2024-12-16 21:35:20.095714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:30.410 [2024-12-16 21:35:20.095921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095976] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.095999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 
21:35:20.096124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:29:30.411 [2024-12-16 21:35:20.096266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:30.411 [2024-12-16 21:35:20.096290] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:30.411 [2024-12-16 21:35:20.096295] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04a3e9c6-0984-41f3-a8a4-585bb9630ddb 00:29:30.411 [2024-12-16 21:35:20.096301] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:30.411 [2024-12-16 21:35:20.096306] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:30.411 [2024-12-16 21:35:20.096314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:30.411 [2024-12-16 21:35:20.096321] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:30.411 [2024-12-16 21:35:20.096326] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:30.411 [2024-12-16 21:35:20.096331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:30.411 [2024-12-16 21:35:20.096341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:30.411 [2024-12-16 21:35:20.096347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:30.411 [2024-12-16 21:35:20.096351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:30.411 [2024-12-16 21:35:20.096357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.411 [2024-12-16 21:35:20.096365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:30.411 [2024-12-16 21:35:20.096374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:29:30.411 [2024-12-16 21:35:20.096379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.411 [2024-12-16 21:35:20.097603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.411 [2024-12-16 21:35:20.097635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:30.411 [2024-12-16 21:35:20.097643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:29:30.411 [2024-12-16 21:35:20.097652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.411 [2024-12-16 21:35:20.097719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.411 [2024-12-16 21:35:20.097727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:30.411 [2024-12-16 21:35:20.097733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:30.411 [2024-12-16 21:35:20.097739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.411 [2024-12-16 21:35:20.102037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.411 [2024-12-16 21:35:20.102060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:30.411 [2024-12-16 21:35:20.102071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.411 [2024-12-16 21:35:20.102077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.411 [2024-12-16 21:35:20.102112] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.411 [2024-12-16 21:35:20.102122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:30.411 [2024-12-16 21:35:20.102128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.411 [2024-12-16 21:35:20.102134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.411 [2024-12-16 21:35:20.102163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.411 [2024-12-16 21:35:20.102172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:30.411 [2024-12-16 21:35:20.102178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.411 [2024-12-16 21:35:20.102185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.411 [2024-12-16 21:35:20.102197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.411 [2024-12-16 21:35:20.102204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:30.412 [2024-12-16 21:35:20.102210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.412 [2024-12-16 21:35:20.102215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.412 [2024-12-16 21:35:20.109848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.412 [2024-12-16 21:35:20.109881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:30.672 [2024-12-16 21:35:20.109895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.672 [2024-12-16 21:35:20.109901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.672 [2024-12-16 21:35:20.115897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.672 [2024-12-16 21:35:20.115928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:30.672 [2024-12-16 21:35:20.115935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.672 [2024-12-16 21:35:20.115941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.672 [2024-12-16 21:35:20.115973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.672 [2024-12-16 21:35:20.115980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:30.672 [2024-12-16 21:35:20.115987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.672 [2024-12-16 21:35:20.115992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.672 [2024-12-16 21:35:20.116016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.672 [2024-12-16 21:35:20.116022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:30.672 [2024-12-16 21:35:20.116031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.672 [2024-12-16 21:35:20.116041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.672 [2024-12-16 21:35:20.116099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.672 [2024-12-16 21:35:20.116107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:30.672 [2024-12-16 21:35:20.116112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.672 [2024-12-16 21:35:20.116118] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.672 [2024-12-16 21:35:20.116142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.672 [2024-12-16 21:35:20.116154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:30.672 [2024-12-16 21:35:20.116161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.672 [2024-12-16 21:35:20.116168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.672 [2024-12-16 21:35:20.116195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.672 [2024-12-16 21:35:20.116202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:30.672 [2024-12-16 21:35:20.116208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.672 [2024-12-16 21:35:20.116214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.672 [2024-12-16 21:35:20.116247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:30.672 [2024-12-16 21:35:20.116260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:30.672 [2024-12-16 21:35:20.116266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:30.672 [2024-12-16 21:35:20.116277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.672 [2024-12-16 21:35:20.116367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.173 ms, result 0 00:29:30.672 00:29:30.672 00:29:30.672 21:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:33.218 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 93174 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93174 ']' 00:29:33.218 Process with pid 93174 is not found 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 93174 00:29:33.218 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (93174) - No such process 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 93174 is not found' 00:29:33.218 21:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:33.476 21:35:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:33.476 Remove shared memory files 00:29:33.476 21:35:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo 
Remove shared memory files 00:29:33.476 21:35:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:33.476 21:35:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:33.476 21:35:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:33.476 21:35:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:33.476 21:35:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:33.476 00:29:33.476 real 4m36.651s 00:29:33.476 user 4m54.661s 00:29:33.476 sys 0m24.771s 00:29:33.476 21:35:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.476 ************************************ 00:29:33.476 END TEST ftl_dirty_shutdown 00:29:33.476 21:35:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.476 ************************************ 00:29:33.476 21:35:23 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:33.476 21:35:23 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:33.476 21:35:23 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.476 21:35:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:33.476 ************************************ 00:29:33.476 START TEST ftl_upgrade_shutdown 00:29:33.476 ************************************ 00:29:33.476 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:33.737 * Looking for test storage... 00:29:33.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:33.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.737 --rc genhtml_branch_coverage=1 00:29:33.737 --rc genhtml_function_coverage=1 00:29:33.737 --rc genhtml_legend=1 00:29:33.737 --rc geninfo_all_blocks=1 00:29:33.737 --rc geninfo_unexecuted_blocks=1 00:29:33.737 00:29:33.737 ' 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:33.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.737 --rc genhtml_branch_coverage=1 00:29:33.737 --rc genhtml_function_coverage=1 00:29:33.737 --rc genhtml_legend=1 00:29:33.737 --rc geninfo_all_blocks=1 00:29:33.737 --rc geninfo_unexecuted_blocks=1 00:29:33.737 00:29:33.737 ' 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:33.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.737 --rc genhtml_branch_coverage=1 00:29:33.737 --rc genhtml_function_coverage=1 00:29:33.737 --rc genhtml_legend=1 00:29:33.737 --rc geninfo_all_blocks=1 00:29:33.737 --rc geninfo_unexecuted_blocks=1 00:29:33.737 00:29:33.737 ' 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:33.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.737 --rc genhtml_branch_coverage=1 00:29:33.737 --rc genhtml_function_coverage=1 00:29:33.737 --rc genhtml_legend=1 00:29:33.737 --rc geninfo_all_blocks=1 00:29:33.737 --rc geninfo_unexecuted_blocks=1 00:29:33.737 00:29:33.737 ' 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.737 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:33.738 21:35:23 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96140 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96140 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96140 ']' 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:33.738 21:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.738 [2024-12-16 21:35:23.377462] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
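
(Editor's aside: the target launch performed here by tcp_target_setup reduces to roughly the shell sketch below. The binary path, cpumask, and RPC socket are the ones visible in this run; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation.)

#!/usr/bin/env bash
# Minimal sketch of launching spdk_tgt and waiting for its RPC server.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" --cpumask='[0]' &
spdk_tgt_pid=$!

# Poll the RPC socket until the target answers; rpc_get_methods is a
# standard SPDK RPC that takes no required arguments.
until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_tgt_pid"    # abort if the target died during startup
    sleep 0.5
done
echo "spdk_tgt ($spdk_tgt_pid) listening on $RPC_SOCK"
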
00:29:33.738 [2024-12-16 21:35:23.377661] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96140 ] 00:29:33.997 [2024-12-16 21:35:23.528066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.997 [2024-12-16 21:35:23.553259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:34.568 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:34.827 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:34.827 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:34.827 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:34.827 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:34.827 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:34.827 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:34.827 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:34.827 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:35.086 { 00:29:35.086 "name": "basen1", 00:29:35.086 "aliases": [ 00:29:35.086 "18351505-cbcb-49b5-805a-cf4a2e0bfde4" 00:29:35.086 ], 00:29:35.086 "product_name": "NVMe disk", 00:29:35.086 "block_size": 4096, 00:29:35.086 "num_blocks": 1310720, 00:29:35.086 "uuid": "18351505-cbcb-49b5-805a-cf4a2e0bfde4", 00:29:35.086 "numa_id": -1, 00:29:35.086 "assigned_rate_limits": { 00:29:35.086 "rw_ios_per_sec": 0, 00:29:35.086 "rw_mbytes_per_sec": 0, 00:29:35.086 "r_mbytes_per_sec": 0, 00:29:35.086 "w_mbytes_per_sec": 0 00:29:35.086 }, 00:29:35.086 "claimed": true, 00:29:35.086 "claim_type": "read_many_write_one", 00:29:35.086 "zoned": false, 00:29:35.086 "supported_io_types": { 00:29:35.086 "read": true, 00:29:35.086 "write": true, 00:29:35.086 "unmap": true, 00:29:35.086 "flush": true, 00:29:35.086 "reset": true, 00:29:35.086 "nvme_admin": true, 00:29:35.086 "nvme_io": true, 00:29:35.086 "nvme_io_md": false, 00:29:35.086 "write_zeroes": true, 00:29:35.086 "zcopy": false, 00:29:35.086 "get_zone_info": false, 00:29:35.086 "zone_management": false, 00:29:35.086 "zone_append": false, 00:29:35.086 "compare": true, 00:29:35.086 "compare_and_write": false, 00:29:35.086 "abort": true, 00:29:35.086 "seek_hole": false, 00:29:35.086 "seek_data": false, 00:29:35.086 "copy": true, 00:29:35.086 "nvme_iov_md": false 00:29:35.086 }, 00:29:35.086 "driver_specific": { 00:29:35.086 "nvme": [ 00:29:35.086 { 00:29:35.086 "pci_address": "0000:00:11.0", 00:29:35.086 "trid": { 00:29:35.086 "trtype": "PCIe", 00:29:35.086 "traddr": "0000:00:11.0" 00:29:35.086 }, 00:29:35.086 "ctrlr_data": { 00:29:35.086 "cntlid": 0, 00:29:35.086 "vendor_id": "0x1b36", 00:29:35.086 "model_number": "QEMU NVMe Ctrl", 00:29:35.086 "serial_number": "12341", 00:29:35.086 "firmware_revision": "8.0.0", 00:29:35.086 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:35.086 "oacs": { 00:29:35.086 "security": 0, 00:29:35.086 "format": 1, 00:29:35.086 "firmware": 0, 00:29:35.086 "ns_manage": 1 00:29:35.086 }, 00:29:35.086 "multi_ctrlr": false, 00:29:35.086 "ana_reporting": false 00:29:35.086 }, 00:29:35.086 "vs": { 00:29:35.086 "nvme_version": "1.4" 00:29:35.086 }, 00:29:35.086 "ns_data": { 00:29:35.086 "id": 1, 00:29:35.086 "can_share": false 00:29:35.086 } 00:29:35.086 } 00:29:35.086 ], 00:29:35.086 "mp_policy": "active_passive" 00:29:35.086 } 00:29:35.086 } 00:29:35.086 ]' 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:35.086 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:35.345 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=81ed7b19-2d4d-4d7a-875e-b954b5314518 00:29:35.345 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:35.345 21:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 81ed7b19-2d4d-4d7a-875e-b954b5314518 00:29:35.606 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0ec4f3eb-7701-4e0f-8bc9-c23457528cde 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0ec4f3eb-7701-4e0f-8bc9-c23457528cde 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=e76af1d0-2c31-4d38-90c4-f82e0f89ca60 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z e76af1d0-2c31-4d38-90c4-f82e0f89ca60 ]] 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 e76af1d0-2c31-4d38-90c4-f82e0f89ca60 5120 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=e76af1d0-2c31-4d38-90c4-f82e0f89ca60 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size e76af1d0-2c31-4d38-90c4-f82e0f89ca60 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=e76af1d0-2c31-4d38-90c4-f82e0f89ca60 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:35.870 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e76af1d0-2c31-4d38-90c4-f82e0f89ca60 00:29:36.128 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:36.128 { 00:29:36.128 "name": "e76af1d0-2c31-4d38-90c4-f82e0f89ca60", 00:29:36.128 "aliases": [ 00:29:36.128 "lvs/basen1p0" 00:29:36.128 ], 00:29:36.128 "product_name": "Logical Volume", 00:29:36.128 "block_size": 4096, 00:29:36.128 "num_blocks": 5242880, 00:29:36.128 "uuid": "e76af1d0-2c31-4d38-90c4-f82e0f89ca60", 00:29:36.128 "assigned_rate_limits": { 00:29:36.128 "rw_ios_per_sec": 0, 00:29:36.128 "rw_mbytes_per_sec": 0, 00:29:36.128 "r_mbytes_per_sec": 0, 00:29:36.128 "w_mbytes_per_sec": 0 00:29:36.128 }, 00:29:36.128 "claimed": false, 00:29:36.128 "zoned": false, 00:29:36.128 "supported_io_types": { 00:29:36.128 "read": true, 00:29:36.128 "write": true, 00:29:36.128 "unmap": true, 00:29:36.128 "flush": false, 00:29:36.128 "reset": true, 00:29:36.128 "nvme_admin": false, 00:29:36.128 "nvme_io": false, 00:29:36.128 "nvme_io_md": false, 00:29:36.128 "write_zeroes": 
true, 00:29:36.128 "zcopy": false, 00:29:36.128 "get_zone_info": false, 00:29:36.128 "zone_management": false, 00:29:36.128 "zone_append": false, 00:29:36.128 "compare": false, 00:29:36.128 "compare_and_write": false, 00:29:36.128 "abort": false, 00:29:36.128 "seek_hole": true, 00:29:36.128 "seek_data": true, 00:29:36.128 "copy": false, 00:29:36.128 "nvme_iov_md": false 00:29:36.128 }, 00:29:36.128 "driver_specific": { 00:29:36.128 "lvol": { 00:29:36.128 "lvol_store_uuid": "0ec4f3eb-7701-4e0f-8bc9-c23457528cde", 00:29:36.128 "base_bdev": "basen1", 00:29:36.128 "thin_provision": true, 00:29:36.128 "num_allocated_clusters": 0, 00:29:36.128 "snapshot": false, 00:29:36.128 "clone": false, 00:29:36.128 "esnap_clone": false 00:29:36.128 } 00:29:36.128 } 00:29:36.128 } 00:29:36.128 ]' 00:29:36.128 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:36.128 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:36.128 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:36.387 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:36.387 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:36.387 21:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:36.387 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:36.387 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:36.387 21:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:36.645 21:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:36.645 21:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:36.645 21:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:36.645 21:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:36.645 21:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:36.645 21:35:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d e76af1d0-2c31-4d38-90c4-f82e0f89ca60 -c cachen1p0 --l2p_dram_limit 2 00:29:36.904 [2024-12-16 21:35:26.502955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.502993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:36.904 [2024-12-16 21:35:26.503004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:36.904 [2024-12-16 21:35:26.503014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.503050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.503059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:36.904 [2024-12-16 21:35:26.503069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:36.904 [2024-12-16 21:35:26.503077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.503091] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:36.904 [2024-12-16 
21:35:26.503272] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:36.904 [2024-12-16 21:35:26.503312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.503320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:36.904 [2024-12-16 21:35:26.503326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.224 ms 00:29:36.904 [2024-12-16 21:35:26.503333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.503353] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID e5177778-d1bf-4c88-a8eb-be7a0f2b0625 00:29:36.904 [2024-12-16 21:35:26.504286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.504308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:36.904 [2024-12-16 21:35:26.504319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:36.904 [2024-12-16 21:35:26.504325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.509008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.509035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:36.904 [2024-12-16 21:35:26.509044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.646 ms 00:29:36.904 [2024-12-16 21:35:26.509053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.509120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.509140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:36.904 [2024-12-16 21:35:26.509149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:36.904 [2024-12-16 21:35:26.509155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.509183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.509190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:36.904 [2024-12-16 21:35:26.509198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:36.904 [2024-12-16 21:35:26.509203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.509220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:36.904 [2024-12-16 21:35:26.510457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.510487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:36.904 [2024-12-16 21:35:26.510495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.242 ms 00:29:36.904 [2024-12-16 21:35:26.510502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.510523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.510531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:36.904 [2024-12-16 21:35:26.510538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:36.904 [2024-12-16 21:35:26.510546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.510565] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:36.904 [2024-12-16 21:35:26.510690] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:36.904 [2024-12-16 21:35:26.510700] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:36.904 [2024-12-16 21:35:26.510712] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:36.904 [2024-12-16 21:35:26.510719] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:36.904 [2024-12-16 21:35:26.510729] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:36.904 [2024-12-16 21:35:26.510735] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:36.904 [2024-12-16 21:35:26.510743] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:36.904 [2024-12-16 21:35:26.510749] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:36.904 [2024-12-16 21:35:26.510756] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:36.904 [2024-12-16 21:35:26.510762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.510769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:36.904 [2024-12-16 21:35:26.510776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:29:36.904 [2024-12-16 21:35:26.510783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.510848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.904 [2024-12-16 21:35:26.510863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:36.904 [2024-12-16 21:35:26.510870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:36.904 [2024-12-16 21:35:26.510878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.904 [2024-12-16 21:35:26.510951] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:36.904 [2024-12-16 21:35:26.510960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:36.904 [2024-12-16 21:35:26.510966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:36.904 [2024-12-16 21:35:26.510974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.904 [2024-12-16 21:35:26.510980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:36.904 [2024-12-16 21:35:26.510986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:36.904 [2024-12-16 21:35:26.510992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:36.904 [2024-12-16 21:35:26.510998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:36.905 [2024-12-16 21:35:26.511004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:36.905 [2024-12-16 21:35:26.511011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:36.905 [2024-12-16 21:35:26.511022] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:36.905 [2024-12-16 21:35:26.511027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:36.905 [2024-12-16 21:35:26.511042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:36.905 [2024-12-16 21:35:26.511049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:36.905 [2024-12-16 21:35:26.511061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:36.905 [2024-12-16 21:35:26.511066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:36.905 [2024-12-16 21:35:26.511077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:36.905 [2024-12-16 21:35:26.511084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:36.905 [2024-12-16 21:35:26.511088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:36.905 [2024-12-16 21:35:26.511095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:36.905 [2024-12-16 21:35:26.511100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:36.905 [2024-12-16 21:35:26.511106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:36.905 [2024-12-16 21:35:26.511112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:36.905 [2024-12-16 21:35:26.511119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:36.905 [2024-12-16 21:35:26.511125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:36.905 [2024-12-16 21:35:26.511134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:36.905 [2024-12-16 21:35:26.511140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:36.905 [2024-12-16 21:35:26.511147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:36.905 [2024-12-16 21:35:26.511152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:36.905 [2024-12-16 21:35:26.511159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:36.905 [2024-12-16 21:35:26.511171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:36.905 [2024-12-16 21:35:26.511177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:36.905 [2024-12-16 21:35:26.511190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:36.905 [2024-12-16 21:35:26.511210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:36.905 [2024-12-16 21:35:26.511215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511222] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:36.905 [2024-12-16 21:35:26.511228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:36.905 [2024-12-16 21:35:26.511238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:36.905 [2024-12-16 21:35:26.511245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:36.905 [2024-12-16 21:35:26.511253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:36.905 [2024-12-16 21:35:26.511263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:36.905 [2024-12-16 21:35:26.511270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:36.905 [2024-12-16 21:35:26.511276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:36.905 [2024-12-16 21:35:26.511283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:36.905 [2024-12-16 21:35:26.511288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:36.905 [2024-12-16 21:35:26.511297] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:36.905 [2024-12-16 21:35:26.511307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:36.905 [2024-12-16 21:35:26.511322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:36.905 [2024-12-16 21:35:26.511343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:36.905 [2024-12-16 21:35:26.511350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:36.905 [2024-12-16 21:35:26.511358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:36.905 [2024-12-16 21:35:26.511365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:36.905 [2024-12-16 21:35:26.511413] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:36.905 [2024-12-16 21:35:26.511423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:36.905 [2024-12-16 21:35:26.511437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:36.905 [2024-12-16 21:35:26.511444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:36.905 [2024-12-16 21:35:26.511451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:36.905 [2024-12-16 21:35:26.511458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.905 [2024-12-16 21:35:26.511464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:36.905 [2024-12-16 21:35:26.511474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.556 ms 00:29:36.905 [2024-12-16 21:35:26.511480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.905 [2024-12-16 21:35:26.511512] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
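
(Editor's aside: the FTL instance whose layout is dumped above was assembled by the RPC calls a few lines earlier. Condensed into one sketch, with all bdev names, PCI addresses, and MiB sizes copied from this run; the $(...) captures are a simplification of how the harness parses the RPC output.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base (data) device: thin-provisioned 20480 MiB lvol on the first NVMe.
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1
lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
base=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")

# Cache device: first 5120 MiB split of the second NVMe as write buffer.
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
$rpc bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0

# FTL startup scrubs the NV cache region ("this may take a while"),
# hence the 60 s RPC timeout the harness passes here.
$rpc -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2
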
00:29:36.905 [2024-12-16 21:35:26.511519] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:41.106 [2024-12-16 21:35:30.243235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.243327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:41.106 [2024-12-16 21:35:30.243354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3731.701 ms 00:29:41.106 [2024-12-16 21:35:30.243363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.258203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.258257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:41.106 [2024-12-16 21:35:30.258275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.710 ms 00:29:41.106 [2024-12-16 21:35:30.258283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.258378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.258389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:41.106 [2024-12-16 21:35:30.258401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:41.106 [2024-12-16 21:35:30.258414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.271591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.271663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:41.106 [2024-12-16 21:35:30.271678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.139 ms 00:29:41.106 [2024-12-16 21:35:30.271690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.271730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.271739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:41.106 [2024-12-16 21:35:30.271751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:41.106 [2024-12-16 21:35:30.271758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.272338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.272382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:41.106 [2024-12-16 21:35:30.272396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.520 ms 00:29:41.106 [2024-12-16 21:35:30.272405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.272461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.272471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:41.106 [2024-12-16 21:35:30.272483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:41.106 [2024-12-16 21:35:30.272491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.281543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.281588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:41.106 [2024-12-16 21:35:30.281601] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.025 ms 00:29:41.106 [2024-12-16 21:35:30.281609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.303032] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:41.106 [2024-12-16 21:35:30.304424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.304478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:41.106 [2024-12-16 21:35:30.304493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.722 ms 00:29:41.106 [2024-12-16 21:35:30.304505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.325276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.325337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:41.106 [2024-12-16 21:35:30.325353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.718 ms 00:29:41.106 [2024-12-16 21:35:30.325367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.325471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.325485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:41.106 [2024-12-16 21:35:30.325495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:41.106 [2024-12-16 21:35:30.325505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.330674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.330726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:41.106 [2024-12-16 21:35:30.330741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.118 ms 00:29:41.106 [2024-12-16 21:35:30.330752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.335762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.335813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:41.106 [2024-12-16 21:35:30.335823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.956 ms 00:29:41.106 [2024-12-16 21:35:30.335832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.336147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.336198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:41.106 [2024-12-16 21:35:30.336209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:29:41.106 [2024-12-16 21:35:30.336223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.386511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.386577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:41.106 [2024-12-16 21:35:30.386596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.248 ms 00:29:41.106 [2024-12-16 21:35:30.386606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.393749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
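
(Editor's aside: a hypothetical post-processing helper, not part of the harness, that tabulates the startup steps traced above by pairing each trace_step "name:" notice with the matching "duration:" notice. It assumes this console output was saved to ftl.log, and first splits the flowed text back into one entry per bracketed timestamp.)

# split flowed entries at bracketed timestamps, keep trace_step notices,
# extract name/duration, strip trailing console timestamps, pair them up
sed 's/\[2024-/\n[2024-/g' ftl.log \
  | grep 'trace_step' \
  | grep -oE '(name|duration): .*' \
  | sed -E 's/ [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+ *$//' \
  | paste - -
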
00:29:41.106 [2024-12-16 21:35:30.393805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:41.106 [2024-12-16 21:35:30.393815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.053 ms 00:29:41.106 [2024-12-16 21:35:30.393831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.399896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.399950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:41.106 [2024-12-16 21:35:30.399960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.016 ms 00:29:41.106 [2024-12-16 21:35:30.399970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.406186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.406243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:41.106 [2024-12-16 21:35:30.406255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.169 ms 00:29:41.106 [2024-12-16 21:35:30.406268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.406326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.406339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:41.106 [2024-12-16 21:35:30.406348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:41.106 [2024-12-16 21:35:30.406358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.406444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.106 [2024-12-16 21:35:30.406458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:41.106 [2024-12-16 21:35:30.406467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:41.106 [2024-12-16 21:35:30.406479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.106 [2024-12-16 21:35:30.408510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3905.058 ms, result 0 00:29:41.106 { 00:29:41.106 "name": "ftl", 00:29:41.106 "uuid": "e5177778-d1bf-4c88-a8eb-be7a0f2b0625" 00:29:41.106 } 00:29:41.106 21:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:41.107 [2024-12-16 21:35:30.622799] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.107 21:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:41.367 21:35:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:41.367 [2024-12-16 21:35:31.031266] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:41.367 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:41.628 [2024-12-16 21:35:31.255725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:41.628 21:35:31 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:42.198 Fill FTL, iteration 1 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=96261 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 96261 /var/tmp/spdk.tgt.sock 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96261 ']' 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:42.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.198 21:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.198 [2024-12-16 21:35:31.709763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
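The xtrace lines above set up the core of the upgrade_shutdown test: two 1 GiB fill/verify passes (iterations=2) over the exported FTL bdev, written in 1 MiB blocks (bs=1048576, count=1024) at queue depth 2, with the seek/skip offsets advanced by one pass's worth of blocks between iterations and an MD5 sum recorded per pass (presumably re-checked later in the test). A rough paraphrase of that loop, reconstructed only from the variables and commands traced in this log, not the verbatim upgrade_shutdown.sh source:

  # Sketch, assuming the traced variables above; seek/skip are in blocks of $bs.
  # $testdir/file stands in for /home/vagrant/spdk_repo/spdk/test/ftl/file.
  bs=1048576; count=1024; qd=2; iterations=2; seek=0; skip=0; sums=()
  for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$(( seek + count ))
    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$(( skip + count ))
    sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')   # one checksum per pass
  done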
00:29:42.198 [2024-12-16 21:35:31.709917] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96261 ] 00:29:42.198 [2024-12-16 21:35:31.854387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.198 [2024-12-16 21:35:31.882114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.132 21:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.132 21:35:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:43.132 21:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:43.132 ftln1 00:29:43.132 21:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:43.132 21:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 96261 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 96261 ']' 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 96261 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96261 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:43.390 killing process with pid 96261 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96261' 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 96261 00:29:43.390 21:35:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 96261 00:29:43.648 21:35:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:43.648 21:35:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:43.906 [2024-12-16 21:35:33.362916] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
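tcp_dd itself is a two-step helper, as the surrounding trace shows: a throwaway spdk_tgt is started on /var/tmp/spdk.tgt.sock just long enough to attach the NVMe/TCP-exported FTL namespace (the attach prints the new bdev name, ftln1) and capture the resulting bdev subsystem config, and spdk_dd then replays that config. Approximately, paraphrased from the commands traced above rather than the verbatim ftl/common.sh source (the redirect into ini.json is assumed):

  # Sketch of tcp_initiator_setup plus the dd invocation, per the traced commands.
  spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1
  { echo '{"subsystems": ['
    rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'; } > $testdir/config/ini.json
  killprocess $spdk_ini_pid
  spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=$testdir/config/ini.json "$@"   # e.g. --if=/dev/urandom --ob=ftln1 ...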
00:29:43.906 [2024-12-16 21:35:33.363026] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96294 ] 00:29:43.906 [2024-12-16 21:35:33.507263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.906 [2024-12-16 21:35:33.524872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.330  [2024-12-16T21:35:35.969Z] Copying: 201/1024 [MB] (201 MBps) [2024-12-16T21:35:36.914Z] Copying: 439/1024 [MB] (238 MBps) [2024-12-16T21:35:37.855Z] Copying: 616/1024 [MB] (177 MBps) [2024-12-16T21:35:38.795Z] Copying: 837/1024 [MB] (221 MBps) [2024-12-16T21:35:38.795Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:29:49.095 00:29:49.095 Calculate MD5 checksum, iteration 1 00:29:49.095 21:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:49.095 21:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:49.095 21:35:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:49.095 21:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:49.095 21:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:49.095 21:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:49.095 21:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:49.095 21:35:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:49.353 [2024-12-16 21:35:38.812091] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:49.353 [2024-12-16 21:35:38.812201] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96353 ] 00:29:49.353 [2024-12-16 21:35:38.951612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.353 [2024-12-16 21:35:38.973496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.733  [2024-12-16T21:35:41.003Z] Copying: 609/1024 [MB] (609 MBps) [2024-12-16T21:35:41.263Z] Copying: 1024/1024 [MB] (average 605 MBps) 00:29:51.563 00:29:51.563 21:35:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:51.563 21:35:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:53.472 Fill FTL, iteration 2 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=91f191be75d1dcecb84c785913f2f42e 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:53.472 21:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:53.732 [2024-12-16 21:35:43.233975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:53.732 [2024-12-16 21:35:43.234094] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96403 ] 00:29:53.732 [2024-12-16 21:35:43.379913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.732 [2024-12-16 21:35:43.398515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.111  [2024-12-16T21:35:45.743Z] Copying: 201/1024 [MB] (201 MBps) [2024-12-16T21:35:46.677Z] Copying: 454/1024 [MB] (253 MBps) [2024-12-16T21:35:47.612Z] Copying: 687/1024 [MB] (233 MBps) [2024-12-16T21:35:48.180Z] Copying: 923/1024 [MB] (236 MBps) [2024-12-16T21:35:48.180Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:29:58.480 00:29:58.480 Calculate MD5 checksum, iteration 2 00:29:58.480 21:35:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:58.480 21:35:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:58.480 21:35:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:58.480 21:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:58.480 21:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:58.480 21:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:58.480 21:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:58.480 21:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:58.739 [2024-12-16 21:35:48.189070] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:58.739 [2024-12-16 21:35:48.189213] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96456 ] 00:29:58.739 [2024-12-16 21:35:48.331316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.739 [2024-12-16 21:35:48.349683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.114  [2024-12-16T21:35:50.381Z] Copying: 632/1024 [MB] (632 MBps) [2024-12-16T21:35:50.950Z] Copying: 1024/1024 [MB] (average 645 MBps) 00:30:01.250 00:30:01.250 21:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:01.250 21:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:03.791 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:03.791 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b6a133d0292bc8038064e9beaac06260 00:30:03.791 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:03.791 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:03.791 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:03.791 [2024-12-16 21:35:53.243318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.791 [2024-12-16 21:35:53.243362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:03.791 [2024-12-16 21:35:53.243375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:03.791 [2024-12-16 21:35:53.243385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.791 [2024-12-16 21:35:53.243404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.791 [2024-12-16 21:35:53.243412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:03.791 [2024-12-16 21:35:53.243420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:03.791 [2024-12-16 21:35:53.243431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.791 [2024-12-16 21:35:53.243447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.791 [2024-12-16 21:35:53.243454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:03.791 [2024-12-16 21:35:53.243463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:03.791 [2024-12-16 21:35:53.243472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.792 [2024-12-16 21:35:53.243527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.198 ms, result 0 00:30:03.792 true 00:30:03.792 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:03.792 { 00:30:03.792 "name": "ftl", 00:30:03.792 "properties": [ 00:30:03.792 { 00:30:03.792 "name": "superblock_version", 00:30:03.792 "value": 5, 00:30:03.792 "read-only": true 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "name": "base_device", 00:30:03.792 "bands": [ 00:30:03.792 { 00:30:03.792 "id": 0, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 
00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 1, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 2, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 3, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 4, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 5, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 6, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 7, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 8, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 9, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 10, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 11, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 12, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 13, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 14, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 15, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 16, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 17, 00:30:03.792 "state": "FREE", 00:30:03.792 "validity": 0.0 00:30:03.792 } 00:30:03.792 ], 00:30:03.792 "read-only": true 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "name": "cache_device", 00:30:03.792 "type": "bdev", 00:30:03.792 "chunks": [ 00:30:03.792 { 00:30:03.792 "id": 0, 00:30:03.792 "state": "INACTIVE", 00:30:03.792 "utilization": 0.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 1, 00:30:03.792 "state": "CLOSED", 00:30:03.792 "utilization": 1.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 2, 00:30:03.792 "state": "CLOSED", 00:30:03.792 "utilization": 1.0 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 3, 00:30:03.792 "state": "OPEN", 00:30:03.792 "utilization": 0.001953125 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "id": 4, 00:30:03.792 "state": "OPEN", 00:30:03.792 "utilization": 0.0 00:30:03.792 } 00:30:03.792 ], 00:30:03.792 "read-only": true 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "name": "verbose_mode", 00:30:03.792 "value": true, 00:30:03.792 "unit": "", 00:30:03.792 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:03.792 }, 00:30:03.792 { 00:30:03.792 "name": "prep_upgrade_on_shutdown", 00:30:03.792 "value": false, 00:30:03.792 "unit": "", 00:30:03.792 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:03.792 } 00:30:03.792 ] 00:30:03.792 } 00:30:03.792 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:04.052 [2024-12-16 21:35:53.659660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
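With verbose_mode enabled, bdev_ftl_get_properties exposes per-chunk cache state, and the test derives its "cache holds data" check from it: the jq filter traced just below counts cache_device chunks with non-zero utilization (here the two CLOSED chunks plus the partially filled OPEN one, hence used=3), and the test asserts the count is non-zero before flipping prep_upgrade_on_shutdown. For reference, the filter on its own:

  rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device")
           | .chunks[] | select(.utilization != 0.0)] | length'   # -> 3 here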
00:30:04.052 [2024-12-16 21:35:53.659692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:04.052 [2024-12-16 21:35:53.659702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:04.052 [2024-12-16 21:35:53.659708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.052 [2024-12-16 21:35:53.659725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.052 [2024-12-16 21:35:53.659732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:04.052 [2024-12-16 21:35:53.659739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:04.052 [2024-12-16 21:35:53.659745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.052 [2024-12-16 21:35:53.659760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.052 [2024-12-16 21:35:53.659767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:04.052 [2024-12-16 21:35:53.659772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:04.052 [2024-12-16 21:35:53.659778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.052 [2024-12-16 21:35:53.659826] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.167 ms, result 0 00:30:04.052 true 00:30:04.052 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:04.052 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:04.052 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:04.311 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:04.311 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:04.311 21:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:04.571 [2024-12-16 21:35:54.068000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.571 [2024-12-16 21:35:54.068031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:04.571 [2024-12-16 21:35:54.068038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:04.571 [2024-12-16 21:35:54.068045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.571 [2024-12-16 21:35:54.068061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.571 [2024-12-16 21:35:54.068068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:04.571 [2024-12-16 21:35:54.068074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:04.571 [2024-12-16 21:35:54.068079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.571 [2024-12-16 21:35:54.068094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.571 [2024-12-16 21:35:54.068100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:04.571 [2024-12-16 21:35:54.068105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:04.571 [2024-12-16 21:35:54.068110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:04.571 [2024-12-16 21:35:54.068149] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.138 ms, result 0 00:30:04.571 true 00:30:04.571 21:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:04.832 { 00:30:04.832 "name": "ftl", 00:30:04.832 "properties": [ 00:30:04.832 { 00:30:04.832 "name": "superblock_version", 00:30:04.832 "value": 5, 00:30:04.832 "read-only": true 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "name": "base_device", 00:30:04.832 "bands": [ 00:30:04.832 { 00:30:04.832 "id": 0, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 1, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 2, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 3, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 4, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 5, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 6, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 7, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 8, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 9, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 10, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 11, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 12, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 13, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 14, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 15, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 16, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 17, 00:30:04.832 "state": "FREE", 00:30:04.832 "validity": 0.0 00:30:04.832 } 00:30:04.832 ], 00:30:04.832 "read-only": true 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "name": "cache_device", 00:30:04.832 "type": "bdev", 00:30:04.832 "chunks": [ 00:30:04.832 { 00:30:04.832 "id": 0, 00:30:04.832 "state": "INACTIVE", 00:30:04.832 "utilization": 0.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 1, 00:30:04.832 "state": "CLOSED", 00:30:04.832 "utilization": 1.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 2, 00:30:04.832 "state": "CLOSED", 00:30:04.832 "utilization": 1.0 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 3, 00:30:04.832 "state": "OPEN", 00:30:04.832 "utilization": 0.001953125 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "id": 4, 00:30:04.832 "state": "OPEN", 00:30:04.832 "utilization": 0.0 00:30:04.832 } 00:30:04.832 ], 00:30:04.832 "read-only": true 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "name": "verbose_mode", 
00:30:04.832 "value": true, 00:30:04.832 "unit": "", 00:30:04.832 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:04.832 }, 00:30:04.832 { 00:30:04.832 "name": "prep_upgrade_on_shutdown", 00:30:04.832 "value": true, 00:30:04.832 "unit": "", 00:30:04.832 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:04.832 } 00:30:04.832 ] 00:30:04.832 } 00:30:04.832 21:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:04.832 21:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 96140 ]] 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 96140 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 96140 ']' 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 96140 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96140 00:30:04.833 killing process with pid 96140 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96140' 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 96140 00:30:04.833 21:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 96140 00:30:04.833 [2024-12-16 21:35:54.431950] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:04.833 [2024-12-16 21:35:54.435979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.833 [2024-12-16 21:35:54.436013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:04.833 [2024-12-16 21:35:54.436024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:04.833 [2024-12-16 21:35:54.436031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.833 [2024-12-16 21:35:54.436049] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:04.833 [2024-12-16 21:35:54.436560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.833 [2024-12-16 21:35:54.436588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:04.833 [2024-12-16 21:35:54.436596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.501 ms 00:30:04.833 [2024-12-16 21:35:54.436603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.629557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.629635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:12.971 [2024-12-16 21:36:02.629648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8192.900 ms 00:30:12.971 [2024-12-16 21:36:02.629656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.630718] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.630738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:12.971 [2024-12-16 21:36:02.630746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.048 ms 00:30:12.971 [2024-12-16 21:36:02.630752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.631597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.631633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:12.971 [2024-12-16 21:36:02.631642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.825 ms 00:30:12.971 [2024-12-16 21:36:02.631651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.633584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.633617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:12.971 [2024-12-16 21:36:02.633637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.895 ms 00:30:12.971 [2024-12-16 21:36:02.633643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.635846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.635876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:12.971 [2024-12-16 21:36:02.635885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.177 ms 00:30:12.971 [2024-12-16 21:36:02.635896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.635952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.635961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:12.971 [2024-12-16 21:36:02.635967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:12.971 [2024-12-16 21:36:02.635973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.637270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.637300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:12.971 [2024-12-16 21:36:02.637307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.284 ms 00:30:12.971 [2024-12-16 21:36:02.637313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.638670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.638698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:12.971 [2024-12-16 21:36:02.638706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.333 ms 00:30:12.971 [2024-12-16 21:36:02.638711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.639821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.639851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:12.971 [2024-12-16 21:36:02.639858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.086 ms 00:30:12.971 [2024-12-16 21:36:02.639864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.640998] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.971 [2024-12-16 21:36:02.641025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:12.971 [2024-12-16 21:36:02.641032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.088 ms 00:30:12.971 [2024-12-16 21:36:02.641038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.971 [2024-12-16 21:36:02.641061] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:12.971 [2024-12-16 21:36:02.641073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:12.971 [2024-12-16 21:36:02.641081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:12.971 [2024-12-16 21:36:02.641087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:12.971 [2024-12-16 21:36:02.641094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:12.971 [2024-12-16 21:36:02.641101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:12.971 [2024-12-16 21:36:02.641106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:12.971 [2024-12-16 21:36:02.641113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:12.971 [2024-12-16 21:36:02.641119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:12.972 [2024-12-16 21:36:02.641200] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:12.972 [2024-12-16 21:36:02.641209] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e5177778-d1bf-4c88-a8eb-be7a0f2b0625 00:30:12.972 [2024-12-16 21:36:02.641216] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:12.972 [2024-12-16 21:36:02.641225] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:12.972 [2024-12-16 21:36:02.641232] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:12.972 [2024-12-16 21:36:02.641238] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:12.972 [2024-12-16 21:36:02.641243] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:12.972 [2024-12-16 21:36:02.641251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:12.972 [2024-12-16 21:36:02.641257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:12.972 [2024-12-16 21:36:02.641262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:12.972 [2024-12-16 21:36:02.641268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:12.972 [2024-12-16 21:36:02.641274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.972 [2024-12-16 21:36:02.641282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:12.972 [2024-12-16 21:36:02.641289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:30:12.972 [2024-12-16 21:36:02.641296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.643068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.972 [2024-12-16 21:36:02.643096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:12.972 [2024-12-16 21:36:02.643104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.759 ms 00:30:12.972 [2024-12-16 21:36:02.643114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.643200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.972 [2024-12-16 21:36:02.643209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:12.972 [2024-12-16 21:36:02.643216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:30:12.972 [2024-12-16 21:36:02.643222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.649311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.649347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:12.972 [2024-12-16 21:36:02.649355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.649362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.649387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.649395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:12.972 [2024-12-16 21:36:02.649401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.649408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.649450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.649458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:12.972 [2024-12-16 21:36:02.649465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.649471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.649485] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.649492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:12.972 [2024-12-16 21:36:02.649501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.649507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.660888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.660925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:12.972 [2024-12-16 21:36:02.660934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.660940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.669651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.669687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:12.972 [2024-12-16 21:36:02.669696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.669703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.669762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.669775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:12.972 [2024-12-16 21:36:02.669782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.669789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.669815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.669824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:12.972 [2024-12-16 21:36:02.669830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.669837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.669895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.669903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:12.972 [2024-12-16 21:36:02.669916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.669922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.669951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.669965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:12.972 [2024-12-16 21:36:02.669972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.669979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.670016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.670025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:12.972 [2024-12-16 21:36:02.670035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.670042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 
[2024-12-16 21:36:02.670084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:12.972 [2024-12-16 21:36:02.670104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:12.972 [2024-12-16 21:36:02.670111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:12.972 [2024-12-16 21:36:02.670117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.972 [2024-12-16 21:36:02.670230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8234.194 ms, result 0 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96633 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96633 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96633 ']' 00:30:17.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.199 21:36:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:17.199 [2024-12-16 21:36:06.715676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
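After the shutdown completes ('FTL shutdown', result 0 above), tcp_target_setup relaunches the main target from the JSON config captured earlier with save_config (presumably written to tgt.json, which the relaunch consumes), so the FTL device comes back without re-issuing the original create RPCs; the startup traces that follow show the superblock being loaded and cachen1p0 reattached as the write-buffer cache. Roughly, per the command traced above, with waitforlisten being the harness helper that polls for the RPC socket:

  # Sketch: relaunch spdk_tgt from the saved config and wait for /var/tmp/spdk.sock.
  spdk_tgt '--cpumask=[0]' --config=$testdir/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid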
00:30:17.199 [2024-12-16 21:36:06.715797] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96633 ] 00:30:17.199 [2024-12-16 21:36:06.857963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.199 [2024-12-16 21:36:06.884941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.781 [2024-12-16 21:36:07.185082] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:17.781 [2024-12-16 21:36:07.185146] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:17.781 [2024-12-16 21:36:07.332458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.781 [2024-12-16 21:36:07.332509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:17.781 [2024-12-16 21:36:07.332524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:17.781 [2024-12-16 21:36:07.332532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.781 [2024-12-16 21:36:07.332581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.781 [2024-12-16 21:36:07.332591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:17.781 [2024-12-16 21:36:07.332602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:17.781 [2024-12-16 21:36:07.332609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.781 [2024-12-16 21:36:07.332654] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:17.781 [2024-12-16 21:36:07.332901] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:17.781 [2024-12-16 21:36:07.332916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.781 [2024-12-16 21:36:07.332924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:17.781 [2024-12-16 21:36:07.332933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.271 ms 00:30:17.781 [2024-12-16 21:36:07.332940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.781 [2024-12-16 21:36:07.334092] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:17.781 [2024-12-16 21:36:07.336821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.781 [2024-12-16 21:36:07.336866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:17.781 [2024-12-16 21:36:07.336876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.731 ms 00:30:17.781 [2024-12-16 21:36:07.336883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.781 [2024-12-16 21:36:07.336938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.781 [2024-12-16 21:36:07.336948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:17.781 [2024-12-16 21:36:07.336956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:17.781 [2024-12-16 21:36:07.336963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.781 [2024-12-16 21:36:07.342111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.781 [2024-12-16 
21:36:07.342147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:17.781 [2024-12-16 21:36:07.342156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.087 ms 00:30:17.781 [2024-12-16 21:36:07.342166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.781 [2024-12-16 21:36:07.342212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.781 [2024-12-16 21:36:07.342221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:17.781 [2024-12-16 21:36:07.342228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:17.781 [2024-12-16 21:36:07.342236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.781 [2024-12-16 21:36:07.342272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.781 [2024-12-16 21:36:07.342283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:17.781 [2024-12-16 21:36:07.342291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:17.781 [2024-12-16 21:36:07.342298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.781 [2024-12-16 21:36:07.342321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:17.781 [2024-12-16 21:36:07.343735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.781 [2024-12-16 21:36:07.343761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:17.782 [2024-12-16 21:36:07.343770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.420 ms 00:30:17.782 [2024-12-16 21:36:07.343777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.782 [2024-12-16 21:36:07.343807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.782 [2024-12-16 21:36:07.343818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:17.782 [2024-12-16 21:36:07.343826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:17.782 [2024-12-16 21:36:07.343833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.782 [2024-12-16 21:36:07.343852] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:17.782 [2024-12-16 21:36:07.343871] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:17.782 [2024-12-16 21:36:07.343909] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:17.782 [2024-12-16 21:36:07.343927] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:17.782 [2024-12-16 21:36:07.344030] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:17.782 [2024-12-16 21:36:07.344043] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:17.782 [2024-12-16 21:36:07.344053] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:17.782 [2024-12-16 21:36:07.344066] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344074] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344081] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:17.782 [2024-12-16 21:36:07.344088] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:17.782 [2024-12-16 21:36:07.344095] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:17.782 [2024-12-16 21:36:07.344102] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:17.782 [2024-12-16 21:36:07.344110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.782 [2024-12-16 21:36:07.344119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:17.782 [2024-12-16 21:36:07.344130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:30:17.782 [2024-12-16 21:36:07.344136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.782 [2024-12-16 21:36:07.344223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.782 [2024-12-16 21:36:07.344231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:17.782 [2024-12-16 21:36:07.344239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:17.782 [2024-12-16 21:36:07.344246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.782 [2024-12-16 21:36:07.344347] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:17.782 [2024-12-16 21:36:07.344367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:17.782 [2024-12-16 21:36:07.344378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:17.782 [2024-12-16 21:36:07.344408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:17.782 [2024-12-16 21:36:07.344423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:17.782 [2024-12-16 21:36:07.344432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:17.782 [2024-12-16 21:36:07.344439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:17.782 [2024-12-16 21:36:07.344455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:17.782 [2024-12-16 21:36:07.344462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:17.782 [2024-12-16 21:36:07.344480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:17.782 [2024-12-16 21:36:07.344488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:17.782 [2024-12-16 21:36:07.344507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:17.782 [2024-12-16 21:36:07.344514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344522] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:17.782 [2024-12-16 21:36:07.344529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:17.782 [2024-12-16 21:36:07.344537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:17.782 [2024-12-16 21:36:07.344552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:17.782 [2024-12-16 21:36:07.344559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:17.782 [2024-12-16 21:36:07.344575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:17.782 [2024-12-16 21:36:07.344582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:17.782 [2024-12-16 21:36:07.344597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:17.782 [2024-12-16 21:36:07.344605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:17.782 [2024-12-16 21:36:07.344621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:17.782 [2024-12-16 21:36:07.344641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:17.782 [2024-12-16 21:36:07.344656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:17.782 [2024-12-16 21:36:07.344678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:17.782 [2024-12-16 21:36:07.344700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:17.782 [2024-12-16 21:36:07.344707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:17.782 [2024-12-16 21:36:07.344722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:17.782 [2024-12-16 21:36:07.344732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:17.782 [2024-12-16 21:36:07.344749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:17.782 [2024-12-16 21:36:07.344759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:17.782 [2024-12-16 21:36:07.344767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:17.782 [2024-12-16 21:36:07.344775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:17.782 [2024-12-16 21:36:07.344782] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:17.782 [2024-12-16 21:36:07.344790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:17.782 [2024-12-16 21:36:07.344799] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:17.782 [2024-12-16 21:36:07.344810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:17.782 [2024-12-16 21:36:07.344834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:17.782 [2024-12-16 21:36:07.344858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:17.782 [2024-12-16 21:36:07.344867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:17.782 [2024-12-16 21:36:07.344875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:17.782 [2024-12-16 21:36:07.344883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:17.782 [2024-12-16 21:36:07.344935] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:17.782 [2024-12-16 21:36:07.344944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:17.782 [2024-12-16 21:36:07.344958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:17.782 [2024-12-16 21:36:07.344966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:17.783 [2024-12-16 21:36:07.344978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:17.783 [2024-12-16 21:36:07.344985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.783 [2024-12-16 21:36:07.344994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:17.783 [2024-12-16 21:36:07.345002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.709 ms 00:30:17.783 [2024-12-16 21:36:07.345009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.783 [2024-12-16 21:36:07.345048] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:17.783 [2024-12-16 21:36:07.345065] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:21.990 [2024-12-16 21:36:11.510694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.510773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:21.990 [2024-12-16 21:36:11.510789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4165.627 ms 00:30:21.990 [2024-12-16 21:36:11.510809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.524988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.525044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:21.990 [2024-12-16 21:36:11.525059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.052 ms 00:30:21.990 [2024-12-16 21:36:11.525069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.525175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.525187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:21.990 [2024-12-16 21:36:11.525199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:21.990 [2024-12-16 21:36:11.525208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.537998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.538052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:21.990 [2024-12-16 21:36:11.538064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.742 ms 00:30:21.990 [2024-12-16 21:36:11.538073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.538106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.538115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:21.990 [2024-12-16 21:36:11.538124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:21.990 [2024-12-16 21:36:11.538136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.538764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.538797] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:21.990 [2024-12-16 21:36:11.538808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:30:21.990 [2024-12-16 21:36:11.538818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.538873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.538884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:21.990 [2024-12-16 21:36:11.538894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:21.990 [2024-12-16 21:36:11.538903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.548220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.548261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:21.990 [2024-12-16 21:36:11.548282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.284 ms 00:30:21.990 [2024-12-16 21:36:11.548295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.568486] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:21.990 [2024-12-16 21:36:11.568564] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:21.990 [2024-12-16 21:36:11.568594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.568608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:21.990 [2024-12-16 21:36:11.568622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.185 ms 00:30:21.990 [2024-12-16 21:36:11.568654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.575500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.575553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:21.990 [2024-12-16 21:36:11.575569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.765 ms 00:30:21.990 [2024-12-16 21:36:11.575580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.578707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.578756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:21.990 [2024-12-16 21:36:11.578770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.047 ms 00:30:21.990 [2024-12-16 21:36:11.578781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.581904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.581947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:21.990 [2024-12-16 21:36:11.581957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.063 ms 00:30:21.990 [2024-12-16 21:36:11.581964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.582323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.582350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:21.990 [2024-12-16 
21:36:11.582361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:30:21.990 [2024-12-16 21:36:11.582368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.608993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.609041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:21.990 [2024-12-16 21:36:11.609053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.600 ms 00:30:21.990 [2024-12-16 21:36:11.609061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.617492] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:21.990 [2024-12-16 21:36:11.618394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.618435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:21.990 [2024-12-16 21:36:11.618447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.279 ms 00:30:21.990 [2024-12-16 21:36:11.618455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.618534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.618545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:21.990 [2024-12-16 21:36:11.618554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:21.990 [2024-12-16 21:36:11.618562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.618609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.618620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:21.990 [2024-12-16 21:36:11.618650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:21.990 [2024-12-16 21:36:11.618658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.618680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.618689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:21.990 [2024-12-16 21:36:11.618697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:21.990 [2024-12-16 21:36:11.618705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.618746] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:21.990 [2024-12-16 21:36:11.618757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.618765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:21.990 [2024-12-16 21:36:11.618774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:21.990 [2024-12-16 21:36:11.618785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.990 [2024-12-16 21:36:11.623806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.990 [2024-12-16 21:36:11.623848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:21.990 [2024-12-16 21:36:11.623871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.002 ms 00:30:21.990 [2024-12-16 21:36:11.623880] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0
00:30:21.990 [2024-12-16 21:36:11.623960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:21.990 [2024-12-16 21:36:11.623975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:30:21.990 [2024-12-16 21:36:11.623984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms
00:30:21.990 [2024-12-16 21:36:11.623993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:21.990 [2024-12-16 21:36:11.625689] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4292.731 ms, result 0
00:30:21.990 [2024-12-16 21:36:11.638942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:21.991 [2024-12-16 21:36:11.654978] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:30:21.991 [2024-12-16 21:36:11.663071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:30:22.251 21:36:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:22.251 21:36:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:30:22.251 21:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:22.251 21:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:30:22.251 21:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
[2024-12-16 21:36:11.911124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:22.251 [2024-12-16 21:36:11.911174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:22.251 [2024-12-16 21:36:11.911188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms
00:30:22.251 [2024-12-16 21:36:11.911197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:22.251 [2024-12-16 21:36:11.911221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:22.251 [2024-12-16 21:36:11.911230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:22.251 [2024-12-16 21:36:11.911242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:30:22.251 [2024-12-16 21:36:11.911251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:22.251 [2024-12-16 21:36:11.911271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:22.251 [2024-12-16 21:36:11.911285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:22.251 [2024-12-16 21:36:11.911293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:22.251 [2024-12-16 21:36:11.911300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:22.251 [2024-12-16 21:36:11.911359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.226 ms, result 0
true
00:30:22.251 21:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:22.511 {
00:30:22.511   "name": "ftl",
00:30:22.511   "properties": [
00:30:22.511     {
00:30:22.511       "name": "superblock_version",
00:30:22.511       "value": 5,
00:30:22.511       "read-only": true
00:30:22.511     },
00:30:22.511     {
00:30:22.511       "name": "base_device",
00:30:22.511       "bands": [
00:30:22.511         {
00:30:22.511           "id": 0,
00:30:22.511           "state": "CLOSED",
00:30:22.511           "validity": 1.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 1,
00:30:22.511           "state": "CLOSED",
00:30:22.511           "validity": 1.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 2,
00:30:22.511           "state": "CLOSED",
00:30:22.511           "validity": 0.007843137254901933
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 3,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 4,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 5,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 6,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 7,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 8,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 9,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 10,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 11,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 12,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 13,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 14,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 15,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 16,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 17,
00:30:22.511           "state": "FREE",
00:30:22.511           "validity": 0.0
00:30:22.511         }
00:30:22.511       ],
00:30:22.511       "read-only": true
00:30:22.511     },
00:30:22.511     {
00:30:22.511       "name": "cache_device",
00:30:22.511       "type": "bdev",
00:30:22.511       "chunks": [
00:30:22.511         {
00:30:22.511           "id": 0,
00:30:22.511           "state": "INACTIVE",
00:30:22.511           "utilization": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 1,
00:30:22.511           "state": "OPEN",
00:30:22.511           "utilization": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 2,
00:30:22.511           "state": "OPEN",
00:30:22.511           "utilization": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 3,
00:30:22.511           "state": "FREE",
00:30:22.511           "utilization": 0.0
00:30:22.511         },
00:30:22.511         {
00:30:22.511           "id": 4,
00:30:22.511           "state": "FREE",
00:30:22.511           "utilization": 0.0
00:30:22.511         }
00:30:22.511       ],
00:30:22.511       "read-only": true
00:30:22.511     },
00:30:22.511     {
00:30:22.511       "name": "verbose_mode",
00:30:22.511       "value": true,
00:30:22.511       "unit": "",
00:30:22.511       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:30:22.511     },
00:30:22.511     {
00:30:22.511       "name": "prep_upgrade_on_shutdown",
00:30:22.511       "value": false,
00:30:22.511       "unit": "",
00:30:22.511       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:30:22.511     }
00:30:22.511   ]
00:30:22.511 }
00:30:22.511 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:30:22.511 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:30:22.511 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:22.772 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:30:22.772 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:30:22.772 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:30:22.772 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:22.772 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:30:23.033 Validate MD5 checksum, iteration 1
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:23.033 21:36:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
[2024-12-16 21:36:12.661424] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
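Before the first spdk_dd pass continues below, note what the gate checks above did: the two jq filters reduce the bdev_ftl_get_properties JSON to a pair of counters, and the checksum loop only proceeds once no NV cache chunk holds data and no band is reported OPENED. A minimal standalone sketch of the same checks, using the rpc.py path from this run; the variable names here are illustrative, not the harness's own:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Count NV cache chunks that still hold data (utilization != 0.0).
used=$("$rpc" bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')

# Count bands reported OPENED, with the same filter the harness traces above.
opened=$("$rpc" bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')

# Both counters must be 0 before the data on ftln1 is checksummed.
[[ $used -ne 0 || $opened -ne 0 ]] && echo "device not quiesced: used=$used opened=$opened"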
00:30:23.033 [2024-12-16 21:36:12.661569] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96712 ]
00:30:23.294 [2024-12-16 21:36:12.808530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:23.294 [2024-12-16 21:36:12.837039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:30:24.681 [2024-12-16T21:36:15.325Z] Copying: 526/1024 [MB] (526 MBps)
[2024-12-16T21:36:15.896Z] Copying: 1024/1024 [MB] (average 520 MBps)
00:30:26.196
00:30:26.196
00:30:26.196 21:36:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:30:26.196 21:36:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=91f191be75d1dcecb84c785913f2f42e
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 91f191be75d1dcecb84c785913f2f42e != \9\1\f\1\9\1\b\e\7\5\d\1\d\c\e\c\b\8\4\c\7\8\5\9\1\3\f\2\f\4\2\e ]]
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:30:28.727 Validate MD5 checksum, iteration 2
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:28.727 21:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
[2024-12-16 21:36:17.917906] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
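Each validation pass above follows the same shape: spdk_dd copies a 1 GiB window (--bs=1048576 --count=1024) out of the ftln1 namespace into a scratch file, offset by --skip so iteration 1 covers MiB 0-1023 and iteration 2 covers MiB 1024-2047, and the file is then hashed and compared. The backslash-escaped right-hand side in the [[ ... != ... ]] trace is just bash quoting the expected value for its pattern match. A condensed sketch of one pass, with expected_sum standing in for the reference value the harness compares against (captured earlier, outside this excerpt):

file=/home/vagrant/spdk_repo/spdk/test/ftl/file
expected_sum=91f191be75d1dcecb84c785913f2f42e   # iteration-1 value from the log above

# Hash the window just copied out of ftln1 and keep only the digest field.
sum=$(md5sum "$file" | cut -f1 -d' ')

# Quoting the right-hand side makes this a literal comparison; a mismatch
# means the FTL volume returned different data than was written.
if [[ $sum != "$expected_sum" ]]; then
    echo "MD5 mismatch: got $sum, expected $expected_sum" >&2
    exit 1
fi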
00:30:28.727 [2024-12-16 21:36:17.918020] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96769 ]
00:30:28.727 [2024-12-16 21:36:18.068842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:28.727 [2024-12-16 21:36:18.086773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:30:30.111 [2024-12-16T21:36:20.379Z] Copying: 571/1024 [MB] (571 MBps)
[2024-12-16T21:36:22.924Z] Copying: 1024/1024 [MB] (average 557 MBps)
00:30:33.224
00:30:33.224
00:30:33.224 21:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:30:33.224 21:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b6a133d0292bc8038064e9beaac06260
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b6a133d0292bc8038064e9beaac06260 != \b\6\a\1\3\3\d\0\2\9\2\b\c\8\0\3\8\0\6\4\e\9\b\e\a\a\c\0\6\2\6\0 ]]
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 96633 ]]
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 96633
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96847
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96847
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96847 ']'
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:35.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
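This is the dirty shutdown the test is named for: SIGKILL gives the target (PID 96633) no chance to flush metadata or close bands, and a fresh spdk_tgt (PID 96847) is then started from the same tgt.json, forcing FTL to rebuild state from the superblock, the P2L checkpoints and the NV cache; the recovery actions show up in the startup log that follows. A minimal sketch of the cycle, with SPDK_BIN and TGT_CNFG as hypothetical stand-ins for the variables the harness keeps in test/ftl/common.sh:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
TGT_CNFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

# SIGKILL skips every shutdown path: no superblock update, no NV cache
# flush, so the FTL device is left dirty on purpose. $spdk_tgt_pid is
# assumed to hold the PID of the running target.
kill -9 "$spdk_tgt_pid"

# Restart from the saved config; on open, FTL detects the dirty state and
# must run recovery before the bdev comes back online.
"$SPDK_BIN" '--cpumask=[0]' --config="$TGT_CNFG" &
spdk_tgt_pid=$!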
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:35.768 21:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:30:35.768 [2024-12-16 21:36:24.972310] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:30:35.768 [2024-12-16 21:36:24.972439] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96847 ]
00:30:35.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 96633 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg"
00:30:35.768 [2024-12-16 21:36:25.114524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:35.768 [2024-12-16 21:36:25.143802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:30:35.768 [2024-12-16 21:36:25.438463] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:30:35.768 [2024-12-16 21:36:25.438524] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:30:36.029 [2024-12-16 21:36:25.584492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:36.029 [2024-12-16 21:36:25.584529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration
00:30:36.029 [2024-12-16 21:36:25.584542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:30:36.029 [2024-12-16 21:36:25.584549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:36.029 [2024-12-16 21:36:25.584590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:36.029 [2024-12-16 21:36:25.584599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:30:36.029 [2024-12-16 21:36:25.584607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms
00:30:36.029 [2024-12-16 21:36:25.584613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:36.029 [2024-12-16 21:36:25.584637] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:30:36.029 [2024-12-16 21:36:25.584830] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:30:36.029 [2024-12-16 21:36:25.584843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:36.029 [2024-12-16 21:36:25.584850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:30:36.029 [2024-12-16 21:36:25.584859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms
00:30:36.029 [2024-12-16 21:36:25.584865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:36.029 [2024-12-16 21:36:25.585072] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0
00:30:36.029 [2024-12-16 21:36:25.589646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:36.029 [2024-12-16 21:36:25.589682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block
00:30:36.029 [2024-12-16 21:36:25.589691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.575 ms
00:30:36.029 [2024-12-16 21:36:25.589697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.029 [2024-12-16 21:36:25.590646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.029 [2024-12-16 21:36:25.590670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:36.029 [2024-12-16 21:36:25.590679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:36.029 [2024-12-16 21:36:25.590687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.029 [2024-12-16 21:36:25.590892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.029 [2024-12-16 21:36:25.590902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:36.029 [2024-12-16 21:36:25.590908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.169 ms 00:30:36.029 [2024-12-16 21:36:25.590914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.029 [2024-12-16 21:36:25.590943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.029 [2024-12-16 21:36:25.590950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:36.029 [2024-12-16 21:36:25.590956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:36.030 [2024-12-16 21:36:25.590961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.030 [2024-12-16 21:36:25.590981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.030 [2024-12-16 21:36:25.590990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:36.030 [2024-12-16 21:36:25.590999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:36.030 [2024-12-16 21:36:25.591005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.030 [2024-12-16 21:36:25.591020] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:36.030 [2024-12-16 21:36:25.591704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.030 [2024-12-16 21:36:25.591723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:36.030 [2024-12-16 21:36:25.591730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.687 ms 00:30:36.030 [2024-12-16 21:36:25.591736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.030 [2024-12-16 21:36:25.591758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.030 [2024-12-16 21:36:25.591766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:36.030 [2024-12-16 21:36:25.591774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:36.030 [2024-12-16 21:36:25.591780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.030 [2024-12-16 21:36:25.591797] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:36.030 [2024-12-16 21:36:25.591815] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:36.030 [2024-12-16 21:36:25.591841] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:36.030 [2024-12-16 21:36:25.591855] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:36.030 [2024-12-16 
21:36:25.591939] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:36.030 [2024-12-16 21:36:25.591949] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:36.030 [2024-12-16 21:36:25.591957] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:36.030 [2024-12-16 21:36:25.591965] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:36.030 [2024-12-16 21:36:25.591974] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:36.030 [2024-12-16 21:36:25.591980] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:36.030 [2024-12-16 21:36:25.591986] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:36.030 [2024-12-16 21:36:25.591991] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:36.030 [2024-12-16 21:36:25.591997] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:36.030 [2024-12-16 21:36:25.592006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.030 [2024-12-16 21:36:25.592012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:36.030 [2024-12-16 21:36:25.592023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.212 ms 00:30:36.030 [2024-12-16 21:36:25.592029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.030 [2024-12-16 21:36:25.592093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.030 [2024-12-16 21:36:25.592105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:36.030 [2024-12-16 21:36:25.592113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:36.030 [2024-12-16 21:36:25.592119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.030 [2024-12-16 21:36:25.592194] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:36.030 [2024-12-16 21:36:25.592203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:36.030 [2024-12-16 21:36:25.592210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:36.030 [2024-12-16 21:36:25.592218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:36.030 [2024-12-16 21:36:25.592234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:36.030 [2024-12-16 21:36:25.592245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:36.030 [2024-12-16 21:36:25.592251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:36.030 [2024-12-16 21:36:25.592255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:36.030 [2024-12-16 21:36:25.592267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:36.030 [2024-12-16 21:36:25.592272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 
21:36:25.592278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:36.030 [2024-12-16 21:36:25.592289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:36.030 [2024-12-16 21:36:25.592294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:36.030 [2024-12-16 21:36:25.592304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:36.030 [2024-12-16 21:36:25.592309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:36.030 [2024-12-16 21:36:25.592319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:36.030 [2024-12-16 21:36:25.592324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:36.030 [2024-12-16 21:36:25.592329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:36.030 [2024-12-16 21:36:25.592334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:36.030 [2024-12-16 21:36:25.592339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:36.030 [2024-12-16 21:36:25.592344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:36.030 [2024-12-16 21:36:25.592349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:36.030 [2024-12-16 21:36:25.592354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:36.030 [2024-12-16 21:36:25.592360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:36.030 [2024-12-16 21:36:25.592365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:36.030 [2024-12-16 21:36:25.592371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:36.030 [2024-12-16 21:36:25.592376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:36.030 [2024-12-16 21:36:25.592381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:36.030 [2024-12-16 21:36:25.592385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:36.030 [2024-12-16 21:36:25.592395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:36.030 [2024-12-16 21:36:25.592401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:36.030 [2024-12-16 21:36:25.592411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:36.030 [2024-12-16 21:36:25.592426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:36.030 [2024-12-16 21:36:25.592431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592436] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:36.030 [2024-12-16 21:36:25.592443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:36.030 
[2024-12-16 21:36:25.592448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:36.030 [2024-12-16 21:36:25.592455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.030 [2024-12-16 21:36:25.592461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:36.030 [2024-12-16 21:36:25.592466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:36.030 [2024-12-16 21:36:25.592471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:36.030 [2024-12-16 21:36:25.592476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:36.030 [2024-12-16 21:36:25.592481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:36.030 [2024-12-16 21:36:25.592486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:36.030 [2024-12-16 21:36:25.592493] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:36.030 [2024-12-16 21:36:25.592499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:36.030 [2024-12-16 21:36:25.592507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:36.030 [2024-12-16 21:36:25.592515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:36.030 [2024-12-16 21:36:25.592520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:36.030 [2024-12-16 21:36:25.592526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:36.030 [2024-12-16 21:36:25.592532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:36.030 [2024-12-16 21:36:25.592538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:36.030 [2024-12-16 21:36:25.592543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:36.031 [2024-12-16 21:36:25.592551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:36.031 [2024-12-16 21:36:25.592556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:36.031 [2024-12-16 21:36:25.592561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:36.031 [2024-12-16 21:36:25.592567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:36.031 [2024-12-16 21:36:25.592573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:36.031 [2024-12-16 21:36:25.592578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:36.031 [2024-12-16 21:36:25.592585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:36.031 [2024-12-16 21:36:25.592590] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:36.031 [2024-12-16 21:36:25.592596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:36.031 [2024-12-16 21:36:25.592602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:36.031 [2024-12-16 21:36:25.592612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:36.031 [2024-12-16 21:36:25.592617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:36.031 [2024-12-16 21:36:25.592623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:36.031 [2024-12-16 21:36:25.592640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.592651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:36.031 [2024-12-16 21:36:25.592657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.498 ms 00:30:36.031 [2024-12-16 21:36:25.592665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.601610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.601699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:36.031 [2024-12-16 21:36:25.601715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.896 ms 00:30:36.031 [2024-12-16 21:36:25.601727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.601778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.601790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:36.031 [2024-12-16 21:36:25.601801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:36.031 [2024-12-16 21:36:25.601813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.613420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.613462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:36.031 [2024-12-16 21:36:25.613478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.547 ms 00:30:36.031 [2024-12-16 21:36:25.613489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.613535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.613547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:36.031 [2024-12-16 21:36:25.613562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:36.031 [2024-12-16 21:36:25.613578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.613704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.613724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:30:36.031 [2024-12-16 21:36:25.613738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:30:36.031 [2024-12-16 21:36:25.613750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.613805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.613818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:36.031 [2024-12-16 21:36:25.613829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:36.031 [2024-12-16 21:36:25.613842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.621514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.621555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:36.031 [2024-12-16 21:36:25.621569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.644 ms 00:30:36.031 [2024-12-16 21:36:25.621580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.621698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.621716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:36.031 [2024-12-16 21:36:25.621732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:36.031 [2024-12-16 21:36:25.621742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.637692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.637733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:36.031 [2024-12-16 21:36:25.637755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.924 ms 00:30:36.031 [2024-12-16 21:36:25.637767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.639159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.639194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:36.031 [2024-12-16 21:36:25.639211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:30:36.031 [2024-12-16 21:36:25.639226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.657659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.657704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:36.031 [2024-12-16 21:36:25.657720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.387 ms 00:30:36.031 [2024-12-16 21:36:25.657730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.657893] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:36.031 [2024-12-16 21:36:25.658045] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:36.031 [2024-12-16 21:36:25.658190] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:36.031 [2024-12-16 21:36:25.658327] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:36.031 [2024-12-16 21:36:25.658351] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.658363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:36.031 [2024-12-16 21:36:25.658378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.574 ms 00:30:36.031 [2024-12-16 21:36:25.658389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.658452] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:36.031 [2024-12-16 21:36:25.658467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.658477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:36.031 [2024-12-16 21:36:25.658489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:36.031 [2024-12-16 21:36:25.658499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.662204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.662250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:36.031 [2024-12-16 21:36:25.662266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.678 ms 00:30:36.031 [2024-12-16 21:36:25.662282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.662963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.662998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:36.031 [2024-12-16 21:36:25.663012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:36.031 [2024-12-16 21:36:25.663022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.031 [2024-12-16 21:36:25.663118] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:36.031 [2024-12-16 21:36:25.663296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.031 [2024-12-16 21:36:25.663315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:36.031 [2024-12-16 21:36:25.663331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:30:36.031 [2024-12-16 21:36:25.663344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.603 [2024-12-16 21:36:26.228563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.603 [2024-12-16 21:36:26.228658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:36.603 [2024-12-16 21:36:26.228676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 564.860 ms 00:30:36.603 [2024-12-16 21:36:26.228686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.603 [2024-12-16 21:36:26.230790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.603 [2024-12-16 21:36:26.230830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:36.603 [2024-12-16 21:36:26.230848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.677 ms 00:30:36.603 [2024-12-16 21:36:26.230856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.603 [2024-12-16 21:36:26.231836] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:30:36.603 [2024-12-16 21:36:26.231877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.603 [2024-12-16 21:36:26.231888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:36.603 [2024-12-16 21:36:26.231898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.990 ms 00:30:36.603 [2024-12-16 21:36:26.231915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.603 [2024-12-16 21:36:26.231953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.603 [2024-12-16 21:36:26.231966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:36.603 [2024-12-16 21:36:26.231975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:36.603 [2024-12-16 21:36:26.231984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.603 [2024-12-16 21:36:26.232020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 568.914 ms, result 0 00:30:36.603 [2024-12-16 21:36:26.232061] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:36.603 [2024-12-16 21:36:26.232127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.603 [2024-12-16 21:36:26.232146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:36.603 [2024-12-16 21:36:26.232155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:30:36.603 [2024-12-16 21:36:26.232163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.907589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.907649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:37.546 [2024-12-16 21:36:26.907662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 674.894 ms 00:30:37.546 [2024-12-16 21:36:26.907668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.909360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.909392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:37.546 [2024-12-16 21:36:26.909400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.227 ms 00:30:37.546 [2024-12-16 21:36:26.909407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.910077] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:37.546 [2024-12-16 21:36:26.910102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.910109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:37.546 [2024-12-16 21:36:26.910117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.674 ms 00:30:37.546 [2024-12-16 21:36:26.910124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.910150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.910158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:37.546 [2024-12-16 21:36:26.910165] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:37.546 [2024-12-16 21:36:26.910171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.910200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 678.137 ms, result 0 00:30:37.546 [2024-12-16 21:36:26.910238] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:37.546 [2024-12-16 21:36:26.910247] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:37.546 [2024-12-16 21:36:26.910256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.910268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:37.546 [2024-12-16 21:36:26.910275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1247.172 ms 00:30:37.546 [2024-12-16 21:36:26.910284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.910308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.910316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:37.546 [2024-12-16 21:36:26.910322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:37.546 [2024-12-16 21:36:26.910329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.917534] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:37.546 [2024-12-16 21:36:26.917622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.917643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:37.546 [2024-12-16 21:36:26.917651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.279 ms 00:30:37.546 [2024-12-16 21:36:26.917661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.918174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.918189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:37.546 [2024-12-16 21:36:26.918197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:30:37.546 [2024-12-16 21:36:26.918202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.919861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.919882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:37.546 [2024-12-16 21:36:26.919898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.646 ms 00:30:37.546 [2024-12-16 21:36:26.919904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.919945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.919953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:37.546 [2024-12-16 21:36:26.919959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:37.546 [2024-12-16 21:36:26.919966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.920048] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.920057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:37.546 [2024-12-16 21:36:26.920066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:37.546 [2024-12-16 21:36:26.920072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.920091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.920103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:37.546 [2024-12-16 21:36:26.920110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:37.546 [2024-12-16 21:36:26.920116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.920145] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:37.546 [2024-12-16 21:36:26.920153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.920159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:37.546 [2024-12-16 21:36:26.920166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:37.546 [2024-12-16 21:36:26.920173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.920213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.546 [2024-12-16 21:36:26.920226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:37.546 [2024-12-16 21:36:26.920232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:37.546 [2024-12-16 21:36:26.920238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.546 [2024-12-16 21:36:26.921180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1336.265 ms, result 0 00:30:37.546 [2024-12-16 21:36:26.933918] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.546 [2024-12-16 21:36:26.949900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:37.546 [2024-12-16 21:36:26.958002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:38.118 Validate MD5 checksum, iteration 1 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:38.118 21:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:38.118 [2024-12-16 21:36:27.606774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:38.119 [2024-12-16 21:36:27.606879] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96876 ] 00:30:38.119 [2024-12-16 21:36:27.751465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.119 [2024-12-16 21:36:27.770048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.506  [2024-12-16T21:36:30.150Z] Copying: 537/1024 [MB] (537 MBps) [2024-12-16T21:36:32.063Z] Copying: 1024/1024 [MB] (average 522 MBps) 00:30:42.363 00:30:42.363 21:36:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:42.363 21:36:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:44.275 Validate MD5 checksum, iteration 2 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=91f191be75d1dcecb84c785913f2f42e 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 91f191be75d1dcecb84c785913f2f42e != \9\1\f\1\9\1\b\e\7\5\d\1\d\c\e\c\b\8\4\c\7\8\5\9\1\3\f\2\f\4\2\e ]] 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:44.275 21:36:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:44.275 [2024-12-16 21:36:33.965869] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:44.275 [2024-12-16 21:36:33.966114] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96944 ] 00:30:44.536 [2024-12-16 21:36:34.110307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.536 [2024-12-16 21:36:34.129578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.930  [2024-12-16T21:36:36.624Z] Copying: 536/1024 [MB] (536 MBps) [2024-12-16T21:36:38.553Z] Copying: 1024/1024 [MB] (average 526 MBps) 00:30:48.853 00:30:48.853 21:36:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:48.853 21:36:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b6a133d0292bc8038064e9beaac06260 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b6a133d0292bc8038064e9beaac06260 != \b\6\a\1\3\3\d\0\2\9\2\b\c\8\0\3\8\0\6\4\e\9\b\e\a\a\c\0\6\2\6\0 ]] 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 96847 ]] 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 96847 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 96847 ']' 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 96847 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96847 00:30:50.765 killing process with pid 96847 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96847' 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 96847 00:30:50.765 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 96847 00:30:50.765 [2024-12-16 21:36:40.444572] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:50.765 [2024-12-16 21:36:40.449010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.765 [2024-12-16 21:36:40.449050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:50.765 [2024-12-16 21:36:40.449061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:50.765 [2024-12-16 21:36:40.449069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.765 [2024-12-16 21:36:40.449087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:50.765 [2024-12-16 21:36:40.449639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.765 [2024-12-16 21:36:40.449668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:50.766 [2024-12-16 21:36:40.449676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:30:50.766 [2024-12-16 21:36:40.449682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.449859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.449875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:50.766 [2024-12-16 21:36:40.449886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.157 ms 00:30:50.766 [2024-12-16 21:36:40.449893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.451236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.451262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:50.766 [2024-12-16 21:36:40.451270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.330 ms 00:30:50.766 [2024-12-16 21:36:40.451281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.452139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.452162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:50.766 [2024-12-16 21:36:40.452170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.833 ms 00:30:50.766 [2024-12-16 21:36:40.452177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.453592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.453624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:50.766 [2024-12-16 21:36:40.453650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.375 ms 00:30:50.766 [2024-12-16 21:36:40.453657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.454872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.454902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:30:50.766 [2024-12-16 21:36:40.454910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.188 ms 00:30:50.766 [2024-12-16 21:36:40.454917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.454977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.454985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:50.766 [2024-12-16 21:36:40.454992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:50.766 [2024-12-16 21:36:40.455002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.456047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.456083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:50.766 [2024-12-16 21:36:40.456090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.033 ms 00:30:50.766 [2024-12-16 21:36:40.456096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.457567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.457596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:50.766 [2024-12-16 21:36:40.457603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.447 ms 00:30:50.766 [2024-12-16 21:36:40.457609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.458797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.458825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:50.766 [2024-12-16 21:36:40.458832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.150 ms 00:30:50.766 [2024-12-16 21:36:40.458838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.460037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.460065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:50.766 [2024-12-16 21:36:40.460073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.152 ms 00:30:50.766 [2024-12-16 21:36:40.460079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.460105] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:50.766 [2024-12-16 21:36:40.460118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:50.766 [2024-12-16 21:36:40.460127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:50.766 [2024-12-16 21:36:40.460134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:50.766 [2024-12-16 21:36:40.460141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460161] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:50.766 [2024-12-16 21:36:40.460236] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:50.766 [2024-12-16 21:36:40.460242] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e5177778-d1bf-4c88-a8eb-be7a0f2b0625 00:30:50.766 [2024-12-16 21:36:40.460248] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:50.766 [2024-12-16 21:36:40.460255] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:50.766 [2024-12-16 21:36:40.460266] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:50.766 [2024-12-16 21:36:40.460273] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:50.766 [2024-12-16 21:36:40.460279] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:50.766 [2024-12-16 21:36:40.460286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:50.766 [2024-12-16 21:36:40.460297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:50.766 [2024-12-16 21:36:40.460302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:50.766 [2024-12-16 21:36:40.460307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:50.766 [2024-12-16 21:36:40.460313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.460320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:50.766 [2024-12-16 21:36:40.460327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:30:50.766 [2024-12-16 21:36:40.460333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.462071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.462099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:30:50.766 [2024-12-16 21:36:40.462107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.725 ms 00:30:50.766 [2024-12-16 21:36:40.462113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.766 [2024-12-16 21:36:40.462215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.766 [2024-12-16 21:36:40.462228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:50.766 [2024-12-16 21:36:40.462239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:30:50.766 [2024-12-16 21:36:40.462246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.026 [2024-12-16 21:36:40.468324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.026 [2024-12-16 21:36:40.468351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:51.026 [2024-12-16 21:36:40.468359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.026 [2024-12-16 21:36:40.468369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.026 [2024-12-16 21:36:40.468392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.026 [2024-12-16 21:36:40.468399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:51.026 [2024-12-16 21:36:40.468406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.026 [2024-12-16 21:36:40.468412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.468468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.468476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:51.027 [2024-12-16 21:36:40.468483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.468489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.468505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.468511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:51.027 [2024-12-16 21:36:40.468518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.468523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.479906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.479944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:51.027 [2024-12-16 21:36:40.479953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.479959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.488502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.488539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:51.027 [2024-12-16 21:36:40.488548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.488554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.488615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.488623] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:51.027 [2024-12-16 21:36:40.488695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.488702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.488735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.488744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:51.027 [2024-12-16 21:36:40.488752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.488757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.488821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.488829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:51.027 [2024-12-16 21:36:40.488836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.488842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.488868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.488876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:51.027 [2024-12-16 21:36:40.488884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.488893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.488928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.488936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:51.027 [2024-12-16 21:36:40.488942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.488949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.488988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:51.027 [2024-12-16 21:36:40.489001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:51.027 [2024-12-16 21:36:40.489007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:51.027 [2024-12-16 21:36:40.489014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.027 [2024-12-16 21:36:40.489124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 40.085 ms, result 0 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:51.027 Remove shared memory files 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid96633 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:51.027 00:30:51.027 real 1m17.597s 00:30:51.027 user 1m42.305s 00:30:51.027 sys 0m20.717s 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.027 ************************************ 00:30:51.027 END TEST ftl_upgrade_shutdown 00:30:51.027 ************************************ 00:30:51.027 21:36:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:51.287 21:36:40 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:51.287 21:36:40 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:51.287 21:36:40 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:51.288 21:36:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.288 21:36:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:51.288 ************************************ 00:30:51.288 START TEST ftl_restore_fast 00:30:51.288 ************************************ 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:51.288 * Looking for test storage... 00:30:51.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:51.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.288 --rc genhtml_branch_coverage=1 00:30:51.288 --rc genhtml_function_coverage=1 00:30:51.288 --rc genhtml_legend=1 00:30:51.288 --rc geninfo_all_blocks=1 00:30:51.288 --rc geninfo_unexecuted_blocks=1 00:30:51.288 00:30:51.288 ' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:51.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.288 --rc genhtml_branch_coverage=1 00:30:51.288 --rc genhtml_function_coverage=1 00:30:51.288 --rc genhtml_legend=1 00:30:51.288 --rc geninfo_all_blocks=1 00:30:51.288 --rc geninfo_unexecuted_blocks=1 00:30:51.288 00:30:51.288 ' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:51.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.288 --rc genhtml_branch_coverage=1 00:30:51.288 --rc genhtml_function_coverage=1 00:30:51.288 --rc genhtml_legend=1 00:30:51.288 --rc geninfo_all_blocks=1 00:30:51.288 --rc geninfo_unexecuted_blocks=1 00:30:51.288 00:30:51.288 ' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:51.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.288 --rc genhtml_branch_coverage=1 00:30:51.288 --rc genhtml_function_coverage=1 00:30:51.288 --rc genhtml_legend=1 00:30:51.288 --rc geninfo_all_blocks=1 00:30:51.288 --rc geninfo_unexecuted_blocks=1 00:30:51.288 00:30:51.288 ' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Knv11zJolD 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:51.288 21:36:40 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=97094 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 97094 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 97094 ']' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:51.288 21:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.549 [2024-12-16 21:36:41.007219] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:51.549 [2024-12-16 21:36:41.007340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97094 ] 00:30:51.549 [2024-12-16 21:36:41.150772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.549 [2024-12-16 21:36:41.174594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.490 21:36:41 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.490 21:36:41 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:52.490 21:36:41 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:52.490 21:36:41 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:52.490 21:36:41 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:52.490 21:36:41 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:52.490 21:36:41 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:52.490 21:36:41 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:52.490 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:52.490 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:52.490 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:52.490 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:52.490 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:52.490 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:52.490 21:36:42 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:30:52.490 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:52.749 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:52.749 { 00:30:52.749 "name": "nvme0n1", 00:30:52.749 "aliases": [ 00:30:52.749 "535d2b0a-df0b-40c7-8ee1-47cde2990147" 00:30:52.749 ], 00:30:52.749 "product_name": "NVMe disk", 00:30:52.749 "block_size": 4096, 00:30:52.749 "num_blocks": 1310720, 00:30:52.750 "uuid": "535d2b0a-df0b-40c7-8ee1-47cde2990147", 00:30:52.750 "numa_id": -1, 00:30:52.750 "assigned_rate_limits": { 00:30:52.750 "rw_ios_per_sec": 0, 00:30:52.750 "rw_mbytes_per_sec": 0, 00:30:52.750 "r_mbytes_per_sec": 0, 00:30:52.750 "w_mbytes_per_sec": 0 00:30:52.750 }, 00:30:52.750 "claimed": true, 00:30:52.750 "claim_type": "read_many_write_one", 00:30:52.750 "zoned": false, 00:30:52.750 "supported_io_types": { 00:30:52.750 "read": true, 00:30:52.750 "write": true, 00:30:52.750 "unmap": true, 00:30:52.750 "flush": true, 00:30:52.750 "reset": true, 00:30:52.750 "nvme_admin": true, 00:30:52.750 "nvme_io": true, 00:30:52.750 "nvme_io_md": false, 00:30:52.750 "write_zeroes": true, 00:30:52.750 "zcopy": false, 00:30:52.750 "get_zone_info": false, 00:30:52.750 "zone_management": false, 00:30:52.750 "zone_append": false, 00:30:52.750 "compare": true, 00:30:52.750 "compare_and_write": false, 00:30:52.750 "abort": true, 00:30:52.750 "seek_hole": false, 00:30:52.750 "seek_data": false, 00:30:52.750 "copy": true, 00:30:52.750 "nvme_iov_md": false 00:30:52.750 }, 00:30:52.750 "driver_specific": { 00:30:52.750 "nvme": [ 00:30:52.750 { 00:30:52.750 "pci_address": "0000:00:11.0", 00:30:52.750 "trid": { 00:30:52.750 "trtype": "PCIe", 00:30:52.750 "traddr": "0000:00:11.0" 00:30:52.750 }, 00:30:52.750 "ctrlr_data": { 00:30:52.750 "cntlid": 0, 00:30:52.750 "vendor_id": "0x1b36", 00:30:52.750 "model_number": "QEMU NVMe Ctrl", 00:30:52.750 "serial_number": "12341", 00:30:52.750 "firmware_revision": "8.0.0", 00:30:52.750 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:52.750 "oacs": { 00:30:52.750 "security": 0, 00:30:52.750 "format": 1, 00:30:52.750 "firmware": 0, 00:30:52.750 "ns_manage": 1 00:30:52.750 }, 00:30:52.750 "multi_ctrlr": false, 00:30:52.750 "ana_reporting": false 00:30:52.750 }, 00:30:52.750 "vs": { 00:30:52.750 "nvme_version": "1.4" 00:30:52.750 }, 00:30:52.750 "ns_data": { 00:30:52.750 "id": 1, 00:30:52.750 "can_share": false 00:30:52.750 } 00:30:52.750 } 00:30:52.750 ], 00:30:52.750 "mp_policy": "active_passive" 00:30:52.750 } 00:30:52.750 } 00:30:52.750 ]' 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:52.750 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:53.008 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=0ec4f3eb-7701-4e0f-8bc9-c23457528cde 00:30:53.008 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:53.008 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ec4f3eb-7701-4e0f-8bc9-c23457528cde 00:30:53.266 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:53.524 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=54762fea-a5ec-4da6-a34b-5a18ba199f7e 00:30:53.524 21:36:42 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 54762fea-a5ec-4da6-a34b-5a18ba199f7e 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:53.524 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:53.782 { 00:30:53.782 "name": "6efe3f69-536a-46b1-9e78-82c0c0afc89f", 00:30:53.782 "aliases": [ 00:30:53.782 "lvs/nvme0n1p0" 00:30:53.782 ], 00:30:53.782 "product_name": "Logical Volume", 00:30:53.782 "block_size": 4096, 00:30:53.782 "num_blocks": 26476544, 00:30:53.782 "uuid": "6efe3f69-536a-46b1-9e78-82c0c0afc89f", 00:30:53.782 "assigned_rate_limits": { 00:30:53.782 "rw_ios_per_sec": 0, 00:30:53.782 "rw_mbytes_per_sec": 0, 00:30:53.782 "r_mbytes_per_sec": 0, 00:30:53.782 "w_mbytes_per_sec": 0 00:30:53.782 }, 00:30:53.782 "claimed": false, 00:30:53.782 "zoned": false, 00:30:53.782 "supported_io_types": { 00:30:53.782 "read": true, 00:30:53.782 "write": true, 00:30:53.782 "unmap": true, 00:30:53.782 "flush": false, 00:30:53.782 "reset": true, 00:30:53.782 "nvme_admin": false, 00:30:53.782 "nvme_io": false, 00:30:53.782 "nvme_io_md": false, 00:30:53.782 "write_zeroes": true, 00:30:53.782 "zcopy": false, 00:30:53.782 "get_zone_info": false, 00:30:53.782 "zone_management": false, 00:30:53.782 
"zone_append": false, 00:30:53.782 "compare": false, 00:30:53.782 "compare_and_write": false, 00:30:53.782 "abort": false, 00:30:53.782 "seek_hole": true, 00:30:53.782 "seek_data": true, 00:30:53.782 "copy": false, 00:30:53.782 "nvme_iov_md": false 00:30:53.782 }, 00:30:53.782 "driver_specific": { 00:30:53.782 "lvol": { 00:30:53.782 "lvol_store_uuid": "54762fea-a5ec-4da6-a34b-5a18ba199f7e", 00:30:53.782 "base_bdev": "nvme0n1", 00:30:53.782 "thin_provision": true, 00:30:53.782 "num_allocated_clusters": 0, 00:30:53.782 "snapshot": false, 00:30:53.782 "clone": false, 00:30:53.782 "esnap_clone": false 00:30:53.782 } 00:30:53.782 } 00:30:53.782 } 00:30:53.782 ]' 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:53.782 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:54.040 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:54.040 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:54.040 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:54.040 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:54.040 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:54.040 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:54.040 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:54.040 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:54.299 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:54.299 { 00:30:54.299 "name": "6efe3f69-536a-46b1-9e78-82c0c0afc89f", 00:30:54.299 "aliases": [ 00:30:54.299 "lvs/nvme0n1p0" 00:30:54.299 ], 00:30:54.299 "product_name": "Logical Volume", 00:30:54.299 "block_size": 4096, 00:30:54.299 "num_blocks": 26476544, 00:30:54.299 "uuid": "6efe3f69-536a-46b1-9e78-82c0c0afc89f", 00:30:54.299 "assigned_rate_limits": { 00:30:54.299 "rw_ios_per_sec": 0, 00:30:54.299 "rw_mbytes_per_sec": 0, 00:30:54.299 "r_mbytes_per_sec": 0, 00:30:54.299 "w_mbytes_per_sec": 0 00:30:54.299 }, 00:30:54.299 "claimed": false, 00:30:54.299 "zoned": false, 00:30:54.299 "supported_io_types": { 00:30:54.299 "read": true, 00:30:54.299 "write": true, 00:30:54.299 "unmap": true, 00:30:54.299 "flush": false, 00:30:54.299 "reset": true, 00:30:54.299 "nvme_admin": false, 00:30:54.299 "nvme_io": false, 00:30:54.299 "nvme_io_md": false, 00:30:54.299 "write_zeroes": true, 00:30:54.299 "zcopy": false, 00:30:54.299 "get_zone_info": false, 00:30:54.299 
"zone_management": false, 00:30:54.299 "zone_append": false, 00:30:54.299 "compare": false, 00:30:54.299 "compare_and_write": false, 00:30:54.299 "abort": false, 00:30:54.299 "seek_hole": true, 00:30:54.299 "seek_data": true, 00:30:54.299 "copy": false, 00:30:54.299 "nvme_iov_md": false 00:30:54.299 }, 00:30:54.299 "driver_specific": { 00:30:54.299 "lvol": { 00:30:54.299 "lvol_store_uuid": "54762fea-a5ec-4da6-a34b-5a18ba199f7e", 00:30:54.299 "base_bdev": "nvme0n1", 00:30:54.299 "thin_provision": true, 00:30:54.299 "num_allocated_clusters": 0, 00:30:54.299 "snapshot": false, 00:30:54.299 "clone": false, 00:30:54.299 "esnap_clone": false 00:30:54.299 } 00:30:54.299 } 00:30:54.299 } 00:30:54.299 ]' 00:30:54.299 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:54.299 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:54.299 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:54.299 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:54.299 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:54.299 21:36:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:54.299 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:54.299 21:36:43 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:54.557 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:54.557 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:54.557 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:54.557 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:54.557 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:54.557 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:54.557 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6efe3f69-536a-46b1-9e78-82c0c0afc89f 00:30:54.815 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:54.815 { 00:30:54.815 "name": "6efe3f69-536a-46b1-9e78-82c0c0afc89f", 00:30:54.815 "aliases": [ 00:30:54.815 "lvs/nvme0n1p0" 00:30:54.815 ], 00:30:54.815 "product_name": "Logical Volume", 00:30:54.815 "block_size": 4096, 00:30:54.815 "num_blocks": 26476544, 00:30:54.815 "uuid": "6efe3f69-536a-46b1-9e78-82c0c0afc89f", 00:30:54.815 "assigned_rate_limits": { 00:30:54.815 "rw_ios_per_sec": 0, 00:30:54.815 "rw_mbytes_per_sec": 0, 00:30:54.815 "r_mbytes_per_sec": 0, 00:30:54.815 "w_mbytes_per_sec": 0 00:30:54.815 }, 00:30:54.815 "claimed": false, 00:30:54.815 "zoned": false, 00:30:54.815 "supported_io_types": { 00:30:54.815 "read": true, 00:30:54.815 "write": true, 00:30:54.815 "unmap": true, 00:30:54.815 "flush": false, 00:30:54.815 "reset": true, 00:30:54.815 "nvme_admin": false, 00:30:54.815 "nvme_io": false, 00:30:54.815 "nvme_io_md": false, 00:30:54.815 "write_zeroes": true, 00:30:54.815 "zcopy": false, 00:30:54.815 "get_zone_info": false, 00:30:54.815 "zone_management": false, 00:30:54.815 "zone_append": false, 00:30:54.815 "compare": false, 00:30:54.815 "compare_and_write": false, 00:30:54.815 "abort": false, 
00:30:54.815 "seek_hole": true, 00:30:54.815 "seek_data": true, 00:30:54.815 "copy": false, 00:30:54.815 "nvme_iov_md": false 00:30:54.815 }, 00:30:54.815 "driver_specific": { 00:30:54.815 "lvol": { 00:30:54.815 "lvol_store_uuid": "54762fea-a5ec-4da6-a34b-5a18ba199f7e", 00:30:54.815 "base_bdev": "nvme0n1", 00:30:54.815 "thin_provision": true, 00:30:54.815 "num_allocated_clusters": 0, 00:30:54.815 "snapshot": false, 00:30:54.815 "clone": false, 00:30:54.815 "esnap_clone": false 00:30:54.815 } 00:30:54.815 } 00:30:54.815 } 00:30:54.815 ]' 00:30:54.815 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6efe3f69-536a-46b1-9e78-82c0c0afc89f --l2p_dram_limit 10' 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:54.816 21:36:44 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6efe3f69-536a-46b1-9e78-82c0c0afc89f --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:55.075 [2024-12-16 21:36:44.519838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.519877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:55.075 [2024-12-16 21:36:44.519887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:55.075 [2024-12-16 21:36:44.519900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.519942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.519952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:55.075 [2024-12-16 21:36:44.519960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:55.075 [2024-12-16 21:36:44.519969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.519984] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:55.075 [2024-12-16 21:36:44.520172] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:55.075 [2024-12-16 21:36:44.520186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.520194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:55.075 [2024-12-16 21:36:44.520200] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:30:55.075 [2024-12-16 21:36:44.520208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.520231] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5 00:30:55.075 [2024-12-16 21:36:44.521176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.521192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:55.075 [2024-12-16 21:36:44.521203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:55.075 [2024-12-16 21:36:44.521209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.525912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.525939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:55.075 [2024-12-16 21:36:44.525948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:30:55.075 [2024-12-16 21:36:44.525954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.526013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.526021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:55.075 [2024-12-16 21:36:44.526028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:55.075 [2024-12-16 21:36:44.526034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.526069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.526076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:55.075 [2024-12-16 21:36:44.526085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:55.075 [2024-12-16 21:36:44.526091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.526111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:55.075 [2024-12-16 21:36:44.527374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.527403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:55.075 [2024-12-16 21:36:44.527411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:30:55.075 [2024-12-16 21:36:44.527418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.527444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.527452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:55.075 [2024-12-16 21:36:44.527458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:55.075 [2024-12-16 21:36:44.527466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.527479] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:55.075 [2024-12-16 21:36:44.527589] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:55.075 [2024-12-16 21:36:44.527598] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:55.075 [2024-12-16 21:36:44.527608] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:55.075 [2024-12-16 21:36:44.527616] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:55.075 [2024-12-16 21:36:44.527645] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:55.075 [2024-12-16 21:36:44.527652] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:55.075 [2024-12-16 21:36:44.527661] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:55.075 [2024-12-16 21:36:44.527667] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:55.075 [2024-12-16 21:36:44.527674] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:55.075 [2024-12-16 21:36:44.527682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.527689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:55.075 [2024-12-16 21:36:44.527696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:30:55.075 [2024-12-16 21:36:44.527706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.527771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.075 [2024-12-16 21:36:44.527783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:55.075 [2024-12-16 21:36:44.527788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:55.075 [2024-12-16 21:36:44.527797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.075 [2024-12-16 21:36:44.527869] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:55.075 [2024-12-16 21:36:44.527879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:55.076 [2024-12-16 21:36:44.527886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:55.076 [2024-12-16 21:36:44.527893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.076 [2024-12-16 21:36:44.527899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:55.076 [2024-12-16 21:36:44.527906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:55.076 [2024-12-16 21:36:44.527911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:55.076 [2024-12-16 21:36:44.527918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:55.076 [2024-12-16 21:36:44.527923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:55.076 [2024-12-16 21:36:44.527929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:55.076 [2024-12-16 21:36:44.527934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:55.076 [2024-12-16 21:36:44.527940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:55.076 [2024-12-16 21:36:44.527945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:55.076 [2024-12-16 21:36:44.527953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:55.076 [2024-12-16 21:36:44.527959] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:55.076 [2024-12-16 21:36:44.527965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.076 [2024-12-16 21:36:44.527974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:55.076 [2024-12-16 21:36:44.527980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:55.076 [2024-12-16 21:36:44.527985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.076 [2024-12-16 21:36:44.527992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:55.076 [2024-12-16 21:36:44.527997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:55.076 [2024-12-16 21:36:44.528003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.076 [2024-12-16 21:36:44.528008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:55.076 [2024-12-16 21:36:44.528015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:55.076 [2024-12-16 21:36:44.528020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.076 [2024-12-16 21:36:44.528027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:55.076 [2024-12-16 21:36:44.528032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:55.076 [2024-12-16 21:36:44.528039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.076 [2024-12-16 21:36:44.528045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:55.076 [2024-12-16 21:36:44.528054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:55.076 [2024-12-16 21:36:44.528060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.076 [2024-12-16 21:36:44.528067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:55.076 [2024-12-16 21:36:44.528073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:55.076 [2024-12-16 21:36:44.528080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:55.076 [2024-12-16 21:36:44.528085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:55.076 [2024-12-16 21:36:44.528092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:55.076 [2024-12-16 21:36:44.528098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:55.076 [2024-12-16 21:36:44.528105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:55.076 [2024-12-16 21:36:44.528111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:55.076 [2024-12-16 21:36:44.528118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.076 [2024-12-16 21:36:44.528123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:55.076 [2024-12-16 21:36:44.528131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:55.076 [2024-12-16 21:36:44.528137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.076 [2024-12-16 21:36:44.528143] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:55.076 [2024-12-16 21:36:44.528154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:55.076 [2024-12-16 21:36:44.528163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:30:55.076 [2024-12-16 21:36:44.528170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.076 [2024-12-16 21:36:44.528177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:55.076 [2024-12-16 21:36:44.528184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:55.076 [2024-12-16 21:36:44.528191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:55.076 [2024-12-16 21:36:44.528197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:55.076 [2024-12-16 21:36:44.528204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:55.076 [2024-12-16 21:36:44.528210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:55.076 [2024-12-16 21:36:44.528220] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:55.076 [2024-12-16 21:36:44.528230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:55.076 [2024-12-16 21:36:44.528239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:55.076 [2024-12-16 21:36:44.528245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:55.076 [2024-12-16 21:36:44.528252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:55.076 [2024-12-16 21:36:44.528259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:55.076 [2024-12-16 21:36:44.528266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:55.076 [2024-12-16 21:36:44.528273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:55.076 [2024-12-16 21:36:44.528282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:55.076 [2024-12-16 21:36:44.528289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:55.076 [2024-12-16 21:36:44.528296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:55.076 [2024-12-16 21:36:44.528302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:55.076 [2024-12-16 21:36:44.528309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:55.076 [2024-12-16 21:36:44.528315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:55.076 [2024-12-16 21:36:44.528323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:55.076 [2024-12-16 21:36:44.528329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
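The trace up to this point is the construction phase of the fast-restore test: restore.sh attaches the base NVMe controller at 0000:00:11.0 (a 4096 B x 1310720-block, i.e. 5120 MiB, QEMU disk), builds an lvstore and a 103424 MiB thin-provisioned lvol on nvme0n1 (thin, hence larger than the base disk), attaches the cache controller at 0000:00:10.0, splits a 5171 MiB write-buffer slice off nvc0n1, and creates the FTL bdev over both. A minimal sketch of the same RPC sequence, assuming a running spdk_tgt and substituting a placeholder for the run-specific lvstore UUID:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device: attach the NVMe controller, then carve a thin lvol from it.
  # get_bdev_size derives MiB via jq over bdev_get_bdevs: block_size * num_blocks.
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $RPC bdev_get_bdevs -b nvme0n1                 # 4096 * 1310720 -> 5120 MiB
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  lvol=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>)
  # Write-buffer cache: attach the second controller, split off a 5171 MiB slice.
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1        # yields nvc0n1p0
  # FTL device over both, 10 MiB L2P DRAM limit, fast-shutdown enabled.
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown

The layout dump continues below with the superblock metadata regions of the base device: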
00:30:55.076 [2024-12-16 21:36:44.528337] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:55.076 [2024-12-16 21:36:44.528343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:55.076 [2024-12-16 21:36:44.528352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:55.076 [2024-12-16 21:36:44.528358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:55.076 [2024-12-16 21:36:44.528366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:55.076 [2024-12-16 21:36:44.528372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:55.076 [2024-12-16 21:36:44.528380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.076 [2024-12-16 21:36:44.528386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:55.076 [2024-12-16 21:36:44.528396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:30:55.076 [2024-12-16 21:36:44.528402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.076 [2024-12-16 21:36:44.528431] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:55.076 [2024-12-16 21:36:44.528441] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:58.368 [2024-12-16 21:36:47.981463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.368 [2024-12-16 21:36:47.981546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:58.368 [2024-12-16 21:36:47.981565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3453.008 ms 00:30:58.368 [2024-12-16 21:36:47.981574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.368 [2024-12-16 21:36:47.994824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.368 [2024-12-16 21:36:47.994862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:58.368 [2024-12-16 21:36:47.994877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.110 ms 00:30:58.368 [2024-12-16 21:36:47.994885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.368 [2024-12-16 21:36:47.995008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.368 [2024-12-16 21:36:47.995019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:58.368 [2024-12-16 21:36:47.995029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:58.368 [2024-12-16 21:36:47.995037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.368 [2024-12-16 21:36:48.003705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.368 [2024-12-16 21:36:48.003737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:58.368 [2024-12-16 21:36:48.003749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.617 ms 00:30:58.368 [2024-12-16 21:36:48.003760] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.368 [2024-12-16 21:36:48.003787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.368 [2024-12-16 21:36:48.003795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:58.368 [2024-12-16 21:36:48.003808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:58.368 [2024-12-16 21:36:48.003815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.368 [2024-12-16 21:36:48.004156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.368 [2024-12-16 21:36:48.004171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:58.368 [2024-12-16 21:36:48.004181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:30:58.368 [2024-12-16 21:36:48.004188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.368 [2024-12-16 21:36:48.004301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.368 [2024-12-16 21:36:48.004309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:58.368 [2024-12-16 21:36:48.004320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:30:58.368 [2024-12-16 21:36:48.004328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.368 [2024-12-16 21:36:48.009816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.368 [2024-12-16 21:36:48.009846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:58.368 [2024-12-16 21:36:48.009857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.466 ms 00:30:58.368 [2024-12-16 21:36:48.009864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.368 [2024-12-16 21:36:48.028583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:58.368 [2024-12-16 21:36:48.031525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.368 [2024-12-16 21:36:48.031559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:58.368 [2024-12-16 21:36:48.031571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.596 ms 00:30:58.368 [2024-12-16 21:36:48.031580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.629 [2024-12-16 21:36:48.096941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.629 [2024-12-16 21:36:48.096988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:58.630 [2024-12-16 21:36:48.097002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.329 ms 00:30:58.630 [2024-12-16 21:36:48.097015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.097206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.097220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:58.630 [2024-12-16 21:36:48.097229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:30:58.630 [2024-12-16 21:36:48.097238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.101834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.101883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:30:58.630 [2024-12-16 21:36:48.101897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.565 ms 00:30:58.630 [2024-12-16 21:36:48.101907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.106009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.106048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:58.630 [2024-12-16 21:36:48.106059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.064 ms 00:30:58.630 [2024-12-16 21:36:48.106068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.106366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.106378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:58.630 [2024-12-16 21:36:48.106387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:30:58.630 [2024-12-16 21:36:48.106398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.145503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.145555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:58.630 [2024-12-16 21:36:48.145570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.086 ms 00:30:58.630 [2024-12-16 21:36:48.145580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.151836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.151888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:58.630 [2024-12-16 21:36:48.151904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.183 ms 00:30:58.630 [2024-12-16 21:36:48.151914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.156940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.157109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:58.630 [2024-12-16 21:36:48.157127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.984 ms 00:30:58.630 [2024-12-16 21:36:48.157136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.162683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.162733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:58.630 [2024-12-16 21:36:48.162743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.498 ms 00:30:58.630 [2024-12-16 21:36:48.162755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.162801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.162813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:58.630 [2024-12-16 21:36:48.162822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:58.630 [2024-12-16 21:36:48.162832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.162910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.630 [2024-12-16 21:36:48.162922] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:58.630 [2024-12-16 21:36:48.162930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:58.630 [2024-12-16 21:36:48.162943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.630 [2024-12-16 21:36:48.163934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3643.630 ms, result 0 00:30:58.630 { 00:30:58.630 "name": "ftl0", 00:30:58.630 "uuid": "e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5" 00:30:58.630 } 00:30:58.630 21:36:48 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:58.630 21:36:48 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:58.890 21:36:48 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:58.890 21:36:48 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:58.890 [2024-12-16 21:36:48.586532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.890 [2024-12-16 21:36:48.586574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:58.890 [2024-12-16 21:36:48.586591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:58.890 [2024-12-16 21:36:48.586598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.890 [2024-12-16 21:36:48.586623] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:58.890 [2024-12-16 21:36:48.587116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.890 [2024-12-16 21:36:48.587138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:58.890 [2024-12-16 21:36:48.587146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:30:58.890 [2024-12-16 21:36:48.587161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.890 [2024-12-16 21:36:48.587410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.890 [2024-12-16 21:36:48.587430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:58.890 [2024-12-16 21:36:48.587439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:30:58.890 [2024-12-16 21:36:48.587452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.890 [2024-12-16 21:36:48.590695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.890 [2024-12-16 21:36:48.590716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:58.890 [2024-12-16 21:36:48.590726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:30:58.890 [2024-12-16 21:36:48.590736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.153 [2024-12-16 21:36:48.596833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.153 [2024-12-16 21:36:48.596863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:59.153 [2024-12-16 21:36:48.596877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.080 ms 00:30:59.153 [2024-12-16 21:36:48.596888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.153 [2024-12-16 21:36:48.599083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
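At this point the FTL management process has finished startup (3643.630 ms) and returned the ftl0 descriptor; restore.sh then snapshots the bdev subsystem configuration, so the same stack can be reloaded later in the test, and unloads the device. The shutdown trace that follows persists the L2P, NV cache, band, and trim metadata before marking the FTL clean. A sketch of this capture-and-unload step, assuming the same rpc.py variable as above and an illustrative output file name:

  # Wrap the bdev subsystem config in a {"subsystems": [...]} envelope, as
  # echoed by restore.sh@61-63, so spdk_tgt can be restarted from it later.
  {
    echo '{"subsystems": ['
    $RPC save_subsystem_config -n bdev
    echo ']}'
  } > ftl.json                      # file name is illustrative, not from this run
  $RPC bdev_ftl_unload -b ftl0      # persists metadata, then sets the clean state

The unload trace resumes below: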
00:30:59.153 [2024-12-16 21:36:48.599120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:59.153 [2024-12-16 21:36:48.599129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:30:59.153 [2024-12-16 21:36:48.599138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.153 [2024-12-16 21:36:48.604253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.153 [2024-12-16 21:36:48.604290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:59.153 [2024-12-16 21:36:48.604300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.084 ms 00:30:59.153 [2024-12-16 21:36:48.604309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.153 [2024-12-16 21:36:48.604427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.153 [2024-12-16 21:36:48.604438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:59.153 [2024-12-16 21:36:48.604448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:30:59.153 [2024-12-16 21:36:48.604457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.153 [2024-12-16 21:36:48.607077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.153 [2024-12-16 21:36:48.607112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:59.153 [2024-12-16 21:36:48.607121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:30:59.153 [2024-12-16 21:36:48.607129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.153 [2024-12-16 21:36:48.609256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.153 [2024-12-16 21:36:48.609290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:59.153 [2024-12-16 21:36:48.609299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:30:59.153 [2024-12-16 21:36:48.609307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.153 [2024-12-16 21:36:48.610919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.153 [2024-12-16 21:36:48.610951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:59.153 [2024-12-16 21:36:48.610960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:30:59.153 [2024-12-16 21:36:48.610969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.153 [2024-12-16 21:36:48.612117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.153 [2024-12-16 21:36:48.612151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:59.153 [2024-12-16 21:36:48.612159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:30:59.153 [2024-12-16 21:36:48.612167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.153 [2024-12-16 21:36:48.612196] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:59.153 [2024-12-16 21:36:48.612210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612230] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612437] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 
21:36:48.612661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:59.153 [2024-12-16 21:36:48.612741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:30:59.154 [2024-12-16 21:36:48.612864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.612998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.613006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.613013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.613022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.613029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.613038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.613045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:59.154 [2024-12-16 21:36:48.613064] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:59.154 [2024-12-16 21:36:48.613071] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5 00:30:59.154 
[2024-12-16 21:36:48.613081] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:59.154 [2024-12-16 21:36:48.613088] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:59.154 [2024-12-16 21:36:48.613096] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:59.154 [2024-12-16 21:36:48.613103] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:59.154 [2024-12-16 21:36:48.613111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:59.154 [2024-12-16 21:36:48.613120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:59.154 [2024-12-16 21:36:48.613129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:59.154 [2024-12-16 21:36:48.613135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:59.154 [2024-12-16 21:36:48.613152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:59.154 [2024-12-16 21:36:48.613159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.154 [2024-12-16 21:36:48.613168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:59.154 [2024-12-16 21:36:48.613176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:30:59.154 [2024-12-16 21:36:48.613184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.614804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.154 [2024-12-16 21:36:48.614903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:59.154 [2024-12-16 21:36:48.614955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:30:59.154 [2024-12-16 21:36:48.614983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.615091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.154 [2024-12-16 21:36:48.615120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:59.154 [2024-12-16 21:36:48.615175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:59.154 [2024-12-16 21:36:48.615199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.620358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.620473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:59.154 [2024-12-16 21:36:48.620525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.620554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.620621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.620659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:59.154 [2024-12-16 21:36:48.620679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.620733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.620821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.620911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:59.154 [2024-12-16 21:36:48.620934] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.620976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.621011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.621034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:59.154 [2024-12-16 21:36:48.621053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.621073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.630264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.630403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:59.154 [2024-12-16 21:36:48.630453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.630480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.638127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.638253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:59.154 [2024-12-16 21:36:48.638302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.638326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.638384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.638412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:59.154 [2024-12-16 21:36:48.638436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.638457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.638521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.638547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:59.154 [2024-12-16 21:36:48.638568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.638662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.638754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.638883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:59.154 [2024-12-16 21:36:48.638967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.638992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.639044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.639072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:59.154 [2024-12-16 21:36:48.639091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.639111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.639159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.639184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:30:59.154 [2024-12-16 21:36:48.639202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.154 [2024-12-16 21:36:48.639222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.154 [2024-12-16 21:36:48.639310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.154 [2024-12-16 21:36:48.639338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:59.155 [2024-12-16 21:36:48.639357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.155 [2024-12-16 21:36:48.639368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.155 [2024-12-16 21:36:48.639496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.931 ms, result 0 00:30:59.155 true 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 97094 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 97094 ']' 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 97094 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97094 00:30:59.155 killing process with pid 97094 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97094' 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 97094 00:30:59.155 21:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 97094 00:31:04.448 21:36:53 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:06.982 262144+0 records in 00:31:06.982 262144+0 records out 00:31:06.982 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.53226 s, 304 MB/s 00:31:06.982 21:36:56 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:09.530 21:36:58 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:09.530 [2024-12-16 21:36:58.762019] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
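The data path of the restore test is visible in the shell trace above: restore.sh fills a 1 GiB file with random data, records its checksum, then writes the file into the ftl0 bdev with spdk_dd. A minimal sketch of those three steps, using the exact paths shown in the log:

    # Generate 1 GiB of random test data (4 KiB blocks x 256K blocks), as in restore.sh@69.
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K

    # Record the checksum so the data can be verified after the FTL device is restored (restore.sh@70).
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile

    # Write the file into the ftl0 bdev; ftl.json describes the bdev stack to bring up (restore.sh@73).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json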
00:31:09.530 [2024-12-16 21:36:58.762140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97294 ] 00:31:09.530 [2024-12-16 21:36:58.907648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.530 [2024-12-16 21:36:58.935683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.530 [2024-12-16 21:36:59.049894] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:09.530 [2024-12-16 21:36:59.049958] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:09.530 [2024-12-16 21:36:59.205482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.530 [2024-12-16 21:36:59.205540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:09.530 [2024-12-16 21:36:59.205558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:09.530 [2024-12-16 21:36:59.205566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.530 [2024-12-16 21:36:59.205619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.530 [2024-12-16 21:36:59.205661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:09.530 [2024-12-16 21:36:59.205670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:09.530 [2024-12-16 21:36:59.205683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.530 [2024-12-16 21:36:59.205709] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:09.530 [2024-12-16 21:36:59.206242] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:09.530 [2024-12-16 21:36:59.206288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.530 [2024-12-16 21:36:59.206301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:09.530 [2024-12-16 21:36:59.206314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:31:09.530 [2024-12-16 21:36:59.206322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.530 [2024-12-16 21:36:59.207936] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:09.530 [2024-12-16 21:36:59.211658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.530 [2024-12-16 21:36:59.211707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:09.530 [2024-12-16 21:36:59.211718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:31:09.531 [2024-12-16 21:36:59.211734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.531 [2024-12-16 21:36:59.211813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.531 [2024-12-16 21:36:59.211827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:09.531 [2024-12-16 21:36:59.211838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:09.531 [2024-12-16 21:36:59.211846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.531 [2024-12-16 21:36:59.219875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:09.531 [2024-12-16 21:36:59.219917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:09.531 [2024-12-16 21:36:59.219933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.978 ms 00:31:09.531 [2024-12-16 21:36:59.219941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.531 [2024-12-16 21:36:59.220043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.531 [2024-12-16 21:36:59.220053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:09.531 [2024-12-16 21:36:59.220062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:09.531 [2024-12-16 21:36:59.220070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.531 [2024-12-16 21:36:59.220130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.531 [2024-12-16 21:36:59.220140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:09.531 [2024-12-16 21:36:59.220148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:09.531 [2024-12-16 21:36:59.220160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.531 [2024-12-16 21:36:59.220186] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:09.531 [2024-12-16 21:36:59.222219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.531 [2024-12-16 21:36:59.222414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:09.531 [2024-12-16 21:36:59.222439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:31:09.531 [2024-12-16 21:36:59.222447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.531 [2024-12-16 21:36:59.222507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.531 [2024-12-16 21:36:59.222515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:09.531 [2024-12-16 21:36:59.222524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:09.531 [2024-12-16 21:36:59.222540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.531 [2024-12-16 21:36:59.222578] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:09.531 [2024-12-16 21:36:59.222600] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:09.531 [2024-12-16 21:36:59.222668] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:09.531 [2024-12-16 21:36:59.222686] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:09.531 [2024-12-16 21:36:59.222790] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:09.531 [2024-12-16 21:36:59.222805] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:09.531 [2024-12-16 21:36:59.222818] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:09.531 [2024-12-16 21:36:59.222830] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:09.531 [2024-12-16 21:36:59.222839] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:09.531 [2024-12-16 21:36:59.222847] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:09.531 [2024-12-16 21:36:59.222854] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:09.531 [2024-12-16 21:36:59.222862] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:09.531 [2024-12-16 21:36:59.222872] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:09.531 [2024-12-16 21:36:59.222881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.531 [2024-12-16 21:36:59.222893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:09.531 [2024-12-16 21:36:59.222901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:31:09.531 [2024-12-16 21:36:59.222917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.531 [2024-12-16 21:36:59.223021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.531 [2024-12-16 21:36:59.223036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:09.531 [2024-12-16 21:36:59.223045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:31:09.531 [2024-12-16 21:36:59.223057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.531 [2024-12-16 21:36:59.223172] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:09.531 [2024-12-16 21:36:59.223183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:09.531 [2024-12-16 21:36:59.223193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:09.531 [2024-12-16 21:36:59.223203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:09.531 [2024-12-16 21:36:59.223220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:09.531 [2024-12-16 21:36:59.223236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:09.531 [2024-12-16 21:36:59.223245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:09.531 [2024-12-16 21:36:59.223262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:09.531 [2024-12-16 21:36:59.223272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:09.531 [2024-12-16 21:36:59.223280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:09.531 [2024-12-16 21:36:59.223289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:09.531 [2024-12-16 21:36:59.223297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:09.531 [2024-12-16 21:36:59.223305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:09.531 [2024-12-16 21:36:59.223318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:09.531 [2024-12-16 21:36:59.223324] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:09.531 [2024-12-16 21:36:59.223338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.531 [2024-12-16 21:36:59.223351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:09.531 [2024-12-16 21:36:59.223357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.531 [2024-12-16 21:36:59.223371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:09.531 [2024-12-16 21:36:59.223378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.531 [2024-12-16 21:36:59.223398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:09.531 [2024-12-16 21:36:59.223405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.531 [2024-12-16 21:36:59.223418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:09.531 [2024-12-16 21:36:59.223425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:09.531 [2024-12-16 21:36:59.223438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:09.531 [2024-12-16 21:36:59.223444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:09.531 [2024-12-16 21:36:59.223451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:09.531 [2024-12-16 21:36:59.223458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:09.531 [2024-12-16 21:36:59.223464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:09.531 [2024-12-16 21:36:59.223470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:09.531 [2024-12-16 21:36:59.223484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:09.531 [2024-12-16 21:36:59.223492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223503] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:09.531 [2024-12-16 21:36:59.223516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:09.531 [2024-12-16 21:36:59.223524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:09.531 [2024-12-16 21:36:59.223534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.531 [2024-12-16 21:36:59.223541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:09.531 [2024-12-16 21:36:59.223548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:09.531 [2024-12-16 21:36:59.223555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:09.531 
[2024-12-16 21:36:59.223561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:09.531 [2024-12-16 21:36:59.223569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:09.531 [2024-12-16 21:36:59.223576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:09.531 [2024-12-16 21:36:59.223584] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:09.531 [2024-12-16 21:36:59.223593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:09.531 [2024-12-16 21:36:59.223601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:09.531 [2024-12-16 21:36:59.223608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:09.531 [2024-12-16 21:36:59.223615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:09.531 [2024-12-16 21:36:59.223622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:09.531 [2024-12-16 21:36:59.223646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:09.532 [2024-12-16 21:36:59.223655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:09.532 [2024-12-16 21:36:59.223662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:09.532 [2024-12-16 21:36:59.223669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:09.532 [2024-12-16 21:36:59.223676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:09.532 [2024-12-16 21:36:59.223689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:09.532 [2024-12-16 21:36:59.223697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:09.532 [2024-12-16 21:36:59.223704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:09.532 [2024-12-16 21:36:59.223712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:09.532 [2024-12-16 21:36:59.223719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:09.532 [2024-12-16 21:36:59.223726] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:09.532 [2024-12-16 21:36:59.223735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:09.532 [2024-12-16 21:36:59.223744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:09.532 [2024-12-16 21:36:59.223752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:09.532 [2024-12-16 21:36:59.223760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:09.532 [2024-12-16 21:36:59.223769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:09.532 [2024-12-16 21:36:59.223778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.532 [2024-12-16 21:36:59.223789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:09.532 [2024-12-16 21:36:59.223797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:31:09.532 [2024-12-16 21:36:59.223813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.794 [2024-12-16 21:36:59.238489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.794 [2024-12-16 21:36:59.238684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:09.794 [2024-12-16 21:36:59.238745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.628 ms 00:31:09.794 [2024-12-16 21:36:59.238769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.794 [2024-12-16 21:36:59.238881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.794 [2024-12-16 21:36:59.238903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:09.794 [2024-12-16 21:36:59.238923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:09.794 [2024-12-16 21:36:59.238942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.794 [2024-12-16 21:36:59.257676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.794 [2024-12-16 21:36:59.257857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:09.794 [2024-12-16 21:36:59.257929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.654 ms 00:31:09.794 [2024-12-16 21:36:59.257953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.794 [2024-12-16 21:36:59.258014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.794 [2024-12-16 21:36:59.258038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:09.794 [2024-12-16 21:36:59.258059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:09.794 [2024-12-16 21:36:59.258078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.794 [2024-12-16 21:36:59.258613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.794 [2024-12-16 21:36:59.258762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:09.794 [2024-12-16 21:36:59.258921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:31:09.794 [2024-12-16 21:36:59.258962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.794 [2024-12-16 21:36:59.259125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.794 [2024-12-16 21:36:59.259455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:09.794 [2024-12-16 21:36:59.259513] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:31:09.794 [2024-12-16 21:36:59.259536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.794 [2024-12-16 21:36:59.267150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.794 [2024-12-16 21:36:59.267307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:09.794 [2024-12-16 21:36:59.267367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.491 ms 00:31:09.794 [2024-12-16 21:36:59.267391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.794 [2024-12-16 21:36:59.270852] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:09.794 [2024-12-16 21:36:59.271025] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:09.794 [2024-12-16 21:36:59.271094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.271116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:09.795 [2024-12-16 21:36:59.271136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:31:09.795 [2024-12-16 21:36:59.271155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.287253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.287410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:09.795 [2024-12-16 21:36:59.287468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.040 ms 00:31:09.795 [2024-12-16 21:36:59.287491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.290325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.290483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:09.795 [2024-12-16 21:36:59.290542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:31:09.795 [2024-12-16 21:36:59.290565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.293293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.293446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:09.795 [2024-12-16 21:36:59.293506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:31:09.795 [2024-12-16 21:36:59.293528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.294186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.294288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:09.795 [2024-12-16 21:36:59.294408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:31:09.795 [2024-12-16 21:36:59.294436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.320955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.321152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:09.795 [2024-12-16 21:36:59.321220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.481 ms 00:31:09.795 [2024-12-16 21:36:59.321243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.329290] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:09.795 [2024-12-16 21:36:59.332344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.332489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:09.795 [2024-12-16 21:36:59.332554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.050 ms 00:31:09.795 [2024-12-16 21:36:59.332577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.332689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.332769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:09.795 [2024-12-16 21:36:59.332790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:09.795 [2024-12-16 21:36:59.332860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.332952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.333022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:09.795 [2024-12-16 21:36:59.333036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:09.795 [2024-12-16 21:36:59.333049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.333076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.333085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:09.795 [2024-12-16 21:36:59.333094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:09.795 [2024-12-16 21:36:59.333105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.333141] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:09.795 [2024-12-16 21:36:59.333163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.333171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:09.795 [2024-12-16 21:36:59.333184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:09.795 [2024-12-16 21:36:59.333192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.338273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.338322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:09.795 [2024-12-16 21:36:59.338334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.059 ms 00:31:09.795 [2024-12-16 21:36:59.338343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 [2024-12-16 21:36:59.338421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.795 [2024-12-16 21:36:59.338431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:09.795 [2024-12-16 21:36:59.338447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:09.795 [2024-12-16 21:36:59.338456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.795 
[2024-12-16 21:36:59.339554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.630 ms, result 0 00:31:10.739  [2024-12-16T21:37:01.376Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-16T21:37:02.753Z] Copying: 41/1024 [MB] (25 MBps) [2024-12-16T21:37:03.692Z] Copying: 74/1024 [MB] (33 MBps) [2024-12-16T21:37:04.626Z] Copying: 92/1024 [MB] (17 MBps) [2024-12-16T21:37:05.565Z] Copying: 121/1024 [MB] (29 MBps) [2024-12-16T21:37:06.554Z] Copying: 150/1024 [MB] (29 MBps) [2024-12-16T21:37:07.492Z] Copying: 169/1024 [MB] (18 MBps) [2024-12-16T21:37:08.435Z] Copying: 192/1024 [MB] (22 MBps) [2024-12-16T21:37:09.378Z] Copying: 207/1024 [MB] (15 MBps) [2024-12-16T21:37:10.760Z] Copying: 221/1024 [MB] (14 MBps) [2024-12-16T21:37:11.703Z] Copying: 243/1024 [MB] (21 MBps) [2024-12-16T21:37:12.644Z] Copying: 265/1024 [MB] (21 MBps) [2024-12-16T21:37:13.587Z] Copying: 280/1024 [MB] (15 MBps) [2024-12-16T21:37:14.531Z] Copying: 303/1024 [MB] (22 MBps) [2024-12-16T21:37:15.465Z] Copying: 329/1024 [MB] (25 MBps) [2024-12-16T21:37:16.398Z] Copying: 357/1024 [MB] (28 MBps) [2024-12-16T21:37:17.781Z] Copying: 380/1024 [MB] (22 MBps) [2024-12-16T21:37:18.348Z] Copying: 397/1024 [MB] (16 MBps) [2024-12-16T21:37:19.723Z] Copying: 415/1024 [MB] (18 MBps) [2024-12-16T21:37:20.657Z] Copying: 427/1024 [MB] (11 MBps) [2024-12-16T21:37:21.590Z] Copying: 440/1024 [MB] (13 MBps) [2024-12-16T21:37:22.524Z] Copying: 461/1024 [MB] (21 MBps) [2024-12-16T21:37:23.461Z] Copying: 479/1024 [MB] (18 MBps) [2024-12-16T21:37:24.399Z] Copying: 497/1024 [MB] (17 MBps) [2024-12-16T21:37:25.779Z] Copying: 513/1024 [MB] (16 MBps) [2024-12-16T21:37:26.713Z] Copying: 533/1024 [MB] (20 MBps) [2024-12-16T21:37:27.647Z] Copying: 553/1024 [MB] (19 MBps) [2024-12-16T21:37:28.586Z] Copying: 564/1024 [MB] (11 MBps) [2024-12-16T21:37:29.526Z] Copying: 577/1024 [MB] (12 MBps) [2024-12-16T21:37:30.465Z] Copying: 589/1024 [MB] (11 MBps) [2024-12-16T21:37:31.401Z] Copying: 600/1024 [MB] (11 MBps) [2024-12-16T21:37:32.375Z] Copying: 612/1024 [MB] (11 MBps) [2024-12-16T21:37:33.749Z] Copying: 629/1024 [MB] (17 MBps) [2024-12-16T21:37:34.684Z] Copying: 644/1024 [MB] (14 MBps) [2024-12-16T21:37:35.619Z] Copying: 662/1024 [MB] (17 MBps) [2024-12-16T21:37:36.557Z] Copying: 674/1024 [MB] (12 MBps) [2024-12-16T21:37:37.491Z] Copying: 685/1024 [MB] (10 MBps) [2024-12-16T21:37:38.426Z] Copying: 697/1024 [MB] (11 MBps) [2024-12-16T21:37:39.360Z] Copying: 709/1024 [MB] (12 MBps) [2024-12-16T21:37:40.735Z] Copying: 721/1024 [MB] (12 MBps) [2024-12-16T21:37:41.668Z] Copying: 733/1024 [MB] (12 MBps) [2024-12-16T21:37:42.602Z] Copying: 749/1024 [MB] (15 MBps) [2024-12-16T21:37:43.536Z] Copying: 761/1024 [MB] (11 MBps) [2024-12-16T21:37:44.470Z] Copying: 776/1024 [MB] (14 MBps) [2024-12-16T21:37:45.403Z] Copying: 788/1024 [MB] (11 MBps) [2024-12-16T21:37:46.777Z] Copying: 802/1024 [MB] (14 MBps) [2024-12-16T21:37:47.711Z] Copying: 820/1024 [MB] (17 MBps) [2024-12-16T21:37:48.646Z] Copying: 832/1024 [MB] (12 MBps) [2024-12-16T21:37:49.581Z] Copying: 844/1024 [MB] (11 MBps) [2024-12-16T21:37:50.515Z] Copying: 855/1024 [MB] (11 MBps) [2024-12-16T21:37:51.450Z] Copying: 867/1024 [MB] (11 MBps) [2024-12-16T21:37:52.384Z] Copying: 878/1024 [MB] (11 MBps) [2024-12-16T21:37:53.758Z] Copying: 890/1024 [MB] (11 MBps) [2024-12-16T21:37:54.696Z] Copying: 901/1024 [MB] (11 MBps) [2024-12-16T21:37:55.634Z] Copying: 916/1024 [MB] (14 MBps) [2024-12-16T21:37:56.568Z] Copying: 929/1024 [MB] (12 MBps) 
[2024-12-16T21:37:57.502Z] Copying: 943/1024 [MB] (14 MBps) [2024-12-16T21:37:58.533Z] Copying: 958/1024 [MB] (15 MBps) [2024-12-16T21:37:59.473Z] Copying: 971/1024 [MB] (13 MBps) [2024-12-16T21:38:00.411Z] Copying: 983/1024 [MB] (11 MBps) [2024-12-16T21:38:01.350Z] Copying: 1005/1024 [MB] (21 MBps) [2024-12-16T21:38:01.610Z] Copying: 1020/1024 [MB] (15 MBps) [2024-12-16T21:38:01.610Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 21:38:01.553968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.910 [2024-12-16 21:38:01.554004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:11.910 [2024-12-16 21:38:01.554015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:11.910 [2024-12-16 21:38:01.554025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.910 [2024-12-16 21:38:01.554041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:11.910 [2024-12-16 21:38:01.554422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.910 [2024-12-16 21:38:01.554436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:11.910 [2024-12-16 21:38:01.554443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:32:11.910 [2024-12-16 21:38:01.554455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.910 [2024-12-16 21:38:01.555972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.910 [2024-12-16 21:38:01.556074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:11.910 [2024-12-16 21:38:01.556087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:32:11.910 [2024-12-16 21:38:01.556093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.910 [2024-12-16 21:38:01.556119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.910 [2024-12-16 21:38:01.556125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:11.910 [2024-12-16 21:38:01.556132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:11.910 [2024-12-16 21:38:01.556138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.910 [2024-12-16 21:38:01.556173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.910 [2024-12-16 21:38:01.556183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:11.910 [2024-12-16 21:38:01.556189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:11.910 [2024-12-16 21:38:01.556195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.910 [2024-12-16 21:38:01.556205] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:11.910 [2024-12-16 21:38:01.556216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 
21:38:01.556244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:32:11.910 [2024-12-16 21:38:01.556389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:11.910 [2024-12-16 21:38:01.556523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free
00:32:11.910 [2024-12-16 21:38:01.556529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 55-100: 0 / 261120 wr_cnt: 0 state: free
00:32:11.911 [2024-12-16 21:38:01.556816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5
00:32:11.911 [2024-12-16 21:38:01.556822] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:32:11.911 [2024-12-16 21:38:01.556827] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:32:11.911 [2024-12-16 21:38:01.556833] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:32:11.911 [2024-12-16 21:38:01.556839] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:32:11.911 [2024-12-16 21:38:01.556844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:32:11.911 [2024-12-16 21:38:01.556870] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.666 ms, status: 0
00:32:11.911 [2024-12-16 21:38:01.558075] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 1.173 ms, status: 0
00:32:11.911 [2024-12-16 21:38:01.558184] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.055 ms, status: 0
00:32:11.911 [2024-12-16 21:38:01.562205] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev (each: duration: 0.000 ms, status: 0)
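The "WAF: inf" entry above is the write-amplification factor: total media writes divided by user writes. This pass issued 32 internal (metadata) writes and zero user writes, so the ratio is undefined and the FTL prints "inf". A minimal sketch of that computation; the helper name is ours, not an SPDK API:

    # Hypothetical helper mirroring the WAF line in the stats dump above.
    def waf(total_writes: int, user_writes: int) -> float:
        # WAF = total media writes / user-initiated writes; zero user writes -> inf
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(32, 0))  # inf, matching "total writes: 32 ... user writes: 0 ... WAF: inf"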
00:32:11.911 [2024-12-16 21:38:01.576145] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:32:11.911 [2024-12-16 21:38:01.576252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 22.261 ms, result 0
00:32:12.171
00:32:12.171 21:38:01 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:32:12.171 [2024-12-16 21:38:01.784165] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:32:12.171 [2024-12-16 21:38:01.784282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97950 ]
00:32:12.429 [2024-12-16 21:38:01.926588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:12.429 [2024-12-16 21:38:01.943413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:12.429 [2024-12-16 21:38:02.024323] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:32:12.429 [2024-12-16 21:38:02.024380] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
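The spdk_dd invocation above reads the FTL bdev back into a plain file: --ib=ftl0 names the input bdev, --of the output file, and --count=262144 the number of blocks to copy. Assuming the bdev's 4 KiB block size (consistent with the 1024 MB total reported by the copy progress further down), that is exactly 1 GiB:

    # Back-of-the-envelope check of the --count argument (4 KiB block size assumed).
    block_size = 4096
    count = 262144                 # --count from the spdk_dd command line
    total = block_size * count
    print(total, total // 2**20)   # 1073741824 bytes, 1024 MiB -> "Copying: 1024/1024 [MB]"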
00:32:12.689 [2024-12-16 21:38:02.172393] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.003 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.172596] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.034 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.172659] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:32:12.689 [2024-12-16 21:38:02.172835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:32:12.689 [2024-12-16 21:38:02.172846] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.193 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.173040] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
00:32:12.689 [2024-12-16 21:38:02.173056] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 0.017 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.173117] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.023 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.173331] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 0.152 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.173439] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.045 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.173478] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.005 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.173512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:32:12.689 [2024-12-16 21:38:02.174839] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 1.330 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.174897] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.009 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.174931] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:32:12.689 [2024-12-16 21:38:02.174946] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] blob load: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
00:32:12.689 [2024-12-16 21:38:02.175072] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] blob store: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
00:32:12.689 [2024-12-16 21:38:02.175098] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:32:12.689 [2024-12-16 21:38:02.175108] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:32:12.689 [2024-12-16 21:38:02.175115] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:32:12.689 [2024-12-16 21:38:02.175121] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:32:12.689 [2024-12-16 21:38:02.175128] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:32:12.689 [2024-12-16 21:38:02.175134] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:32:12.689 [2024-12-16 21:38:02.175140] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.210 ms, status: 0
00:32:12.689 [2024-12-16 21:38:02.175221] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.053 ms, status: 0
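The layout figures above fit together: 20971520 L2P entries at 4 bytes per entry make an 80 MiB mapping table (the l2p region in the dump below), and at an assumed 4 KiB FTL block size those entries address 80 GiB of logical space. The base device itself is larger (103424 MiB, about 101 GiB); the difference is presumably FTL overprovisioning and metadata, an assumption the log does not state. A quick cross-check:

    # Cross-checking the L2P figures from the layout setup log (4 KiB blocks assumed).
    entries = 20971520                    # "L2P entries"
    addr_size = 4                         # "L2P address size"
    block = 4096
    print(entries * addr_size // 2**20)   # 80 MiB table -> l2p region "blocks: 80.00 MiB"
    print(entries * block // 2**30)       # 80 GiB of addressable logical space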
00:32:12.689 [2024-12-16 21:38:02.175312] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    region            offset (MiB)   blocks (MiB)
    sb                      0.00           0.12
    l2p                     0.12          80.00
    band_md                80.12           0.50
    band_md_mirror         80.62           0.50
    p2l0                   81.12           8.00
    p2l1                   89.12           8.00
    p2l2                   97.12           8.00
    p2l3                  105.12           8.00
    trim_md               113.12           0.25
    trim_md_mirror        113.38           0.25
    trim_log              113.62           0.12
    trim_log_mirror       113.75           0.12
    nvc_md                113.88           0.12
    nvc_md_mirror         114.00           0.12
00:32:12.690 [2024-12-16 21:38:02.175550] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    region            offset (MiB)   blocks (MiB)
    sb_mirror               0.00           0.12
    data_btm                0.25      102400.00
    vmap               102400.25           3.38
00:32:12.690 [2024-12-16 21:38:02.175614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:32:12.690 [2024-12-16 21:38:02.175730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:32:12.690 [2024-12-16 21:38:02.175774] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.514 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.181043] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 5.218 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.181132] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.044 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.197235] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 16.037 ms, status: 0
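The hex superblock layout and the MiB dump describe the same regions in different units; converting block counts at the assumed 4 KiB block size reproduces the MiB figures, for example the l2p region (type:0x2) and the four P2L regions (types 0xa through 0xd):

    # Converting SB metadata block counts to MiB (4 KiB block size assumed).
    block = 4096
    print(0x5000 * block // 2**20)  # 80 -> "l2p ... blocks: 80.00 MiB"
    print(0x800 * block // 2**20)   # 8  -> "p2l0..p2l3 ... blocks: 8.00 MiB" (2048 pages each)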
00:32:12.690 [2024-12-16 21:38:02.197326] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.003 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.197442] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.046 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.197579] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.092 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.202350] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 4.729 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.202499] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:32:12.690 [2024-12-16 21:38:02.202512] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:32:12.690 [2024-12-16 21:38:02.202520] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 0.042 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.214878] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 12.318 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.215002] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 0.070 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.215053] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 0.001 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.215291] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.192 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.215321] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
00:32:12.690 [2024-12-16 21:38:02.215328] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 0.009 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.221701] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:32:12.690 [2024-12-16 21:38:02.221795] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 6.434 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.223479] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 1.648 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.223663] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.038 ms, status: 0
00:32:12.690 [2024-12-16 21:38:02.223701] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.003 ms, status: 0
00:32:12.691 [2024-12-16 21:38:02.223744] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:32:12.691 [2024-12-16 21:38:02.223751] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.008 ms, status: 0
00:32:12.691 [2024-12-16 21:38:02.227372] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 3.586 ms, status: 0
00:32:12.691 [2024-12-16 21:38:02.227461] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.024 ms, status: 0
00:32:12.691 [2024-12-16 21:38:02.228424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 55.744 ms, result 0
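The 'FTL startup' total of 55.744 ms is close to the sum of the per-step durations traced above; the small remainder is time spent between steps, which the trace does not attribute to any action. A throwaway check against a saved copy of the startup lines (the file name is hypothetical):

    # Sum the per-step "duration: X ms" entries and compare with the reported total.
    import re

    total = 0.0
    with open("ftl_startup.log") as log:   # hypothetical local copy of the lines above
        for line in log:
            match = re.search(r"duration: ([0-9.]+) ms", line)
            if match:
                total += float(match.group(1))
    print(total)  # a couple of ms under 55.744, the gap being untraced inter-step time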
00:32:14.071 [2024-12-16T21:38:04.710Z] Copying: 10/1024 [MB] (10 MBps)
00:33:31.239 [2024-12-16T21:39:20.939Z] Copying: 1024/1024 [MB] (average 13 MBps)
00:33:31.239 [2024-12-16 21:39:20.914692] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.005 ms, status: 0
00:33:31.239 [2024-12-16 21:39:20.915341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:33:31.239 [2024-12-16 21:39:20.916178] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 0.779 ms, status: 0
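The copy settles at "average 13 MBps", which matches the wall clock: roughly 1024 MB between the end of FTL startup (21:38:02.228) and the final progress tick (21:39:20.939). A quick cross-check:

    # Rough cross-check of spdk_dd's "average 13 MBps".
    from datetime import datetime

    t0 = datetime.fromisoformat("2024-12-16T21:38:02.228000")  # 'FTL startup' finished
    t1 = datetime.fromisoformat("2024-12-16T21:39:20.939000")  # final 1024/1024 tick
    seconds = (t1 - t0).total_seconds()                        # ~78.7 s
    print(round(1024 / seconds, 1))                            # ~13.0 MB/s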
00:33:31.239 [2024-12-16 21:39:20.917519] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 0.285 ms, status: 0
00:33:31.239 [2024-12-16 21:39:20.917814] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Fast persist NV cache metadata, duration: 0.007 ms, status: 0
00:33:31.239 [2024-12-16 21:39:20.918146] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL SHM clean state, duration: 0.025 ms, status: 0
00:33:31.239 [2024-12-16 21:39:20.918393] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:33:31.239 [2024-12-16 21:39:20.918427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free
00:33:31.240 [2024-12-16 21:39:20.921830] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5
00:33:31.240 [2024-12-16 21:39:20.921839] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:33:31.240 [2024-12-16 21:39:20.921848] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:33:31.240 [2024-12-16 21:39:20.921855] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:33:31.240 [2024-12-16 21:39:20.921864] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:33:31.240 [2024-12-16 21:39:20.921876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:33:31.240 [2024-12-16 21:39:20.921924] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 3.532 ms, status: 0
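The band accounting above is internally consistent: the base device's data region (type:0x9, blk_sz:0x1900000 in the earlier layout dump) is 26,214,400 blocks, i.e. 100 bands of 0x40000 = 262,144 blocks, while the validity dump counts 261,120 usable blocks per band. The 1,024-block difference per band presumably holds per-band metadata; that is an assumption, the log does not say. At the assumed 4 KiB block size the 100 free bands cover about 102,000 MiB:

    # Sanity-checking band sizes against the layout regions (4 KiB blocks assumed).
    data_region = 0x1900000                  # base-dev region type:0x9, in blocks
    bands = 100                              # Band 1..100 in the validity dump
    per_band = data_region // bands          # 262144 blocks (0x40000) per band
    usable = 261120                          # "0 / 261120" per band above
    print(per_band - usable)                 # 1024 blocks per band not user-visible
    print(bands * usable * 4096 // 2**20)    # 102000 MiB of user-addressable space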
21:39:20.925079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.872 ms 00:33:31.240 [2024-12-16 21:39:20.925105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.240 [2024-12-16 21:39:20.925261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.240 [2024-12-16 21:39:20.925381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:31.240 [2024-12-16 21:39:20.925462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:33:31.240 [2024-12-16 21:39:20.925486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.240 [2024-12-16 21:39:20.933449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.240 [2024-12-16 21:39:20.933616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:31.240 [2024-12-16 21:39:20.933691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.240 [2024-12-16 21:39:20.933715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.240 [2024-12-16 21:39:20.933794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.240 [2024-12-16 21:39:20.933859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:31.240 [2024-12-16 21:39:20.933892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.240 [2024-12-16 21:39:20.933912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.240 [2024-12-16 21:39:20.934452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.240 [2024-12-16 21:39:20.934482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:31.240 [2024-12-16 21:39:20.934494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.240 [2024-12-16 21:39:20.934504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.240 [2024-12-16 21:39:20.934523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.240 [2024-12-16 21:39:20.934532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:31.240 [2024-12-16 21:39:20.934540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.240 [2024-12-16 21:39:20.934556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.501 [2024-12-16 21:39:20.949040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.501 [2024-12-16 21:39:20.949254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:31.501 [2024-12-16 21:39:20.949274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.501 [2024-12-16 21:39:20.949283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.501 [2024-12-16 21:39:20.960654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.501 [2024-12-16 21:39:20.960691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:31.501 [2024-12-16 21:39:20.960701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.501 [2024-12-16 21:39:20.960718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.501 [2024-12-16 21:39:20.960773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.501 [2024-12-16 21:39:20.960791] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:31.501 [2024-12-16 21:39:20.960800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.501 [2024-12-16 21:39:20.960808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.501 [2024-12-16 21:39:20.960845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.501 [2024-12-16 21:39:20.960854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:31.501 [2024-12-16 21:39:20.960863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.501 [2024-12-16 21:39:20.960871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.501 [2024-12-16 21:39:20.960932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.501 [2024-12-16 21:39:20.960942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:31.501 [2024-12-16 21:39:20.960951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.501 [2024-12-16 21:39:20.960959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.501 [2024-12-16 21:39:20.960997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.501 [2024-12-16 21:39:20.961006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:31.501 [2024-12-16 21:39:20.961014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.501 [2024-12-16 21:39:20.961029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.501 [2024-12-16 21:39:20.961070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.501 [2024-12-16 21:39:20.961079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:31.501 [2024-12-16 21:39:20.961088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.501 [2024-12-16 21:39:20.961096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.501 [2024-12-16 21:39:20.961140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.501 [2024-12-16 21:39:20.961150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:31.501 [2024-12-16 21:39:20.961158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.501 [2024-12-16 21:39:20.961168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.501 [2024-12-16 21:39:20.961321] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 46.602 ms, result 0 00:33:31.501 00:33:31.501 00:33:31.501 21:39:21 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:34.045 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:34.045 21:39:23 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:34.045 [2024-12-16 21:39:23.360323] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:33:34.045 [2024-12-16 21:39:23.360443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98763 ] 00:33:34.045 [2024-12-16 21:39:23.503840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.045 [2024-12-16 21:39:23.529819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.045 [2024-12-16 21:39:23.646573] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:34.045 [2024-12-16 21:39:23.646673] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:34.307 [2024-12-16 21:39:23.808378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.808624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:34.307 [2024-12-16 21:39:23.808665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:34.307 [2024-12-16 21:39:23.808676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.808743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.808753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:34.307 [2024-12-16 21:39:23.808763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:34.307 [2024-12-16 21:39:23.808778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.808806] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:34.307 [2024-12-16 21:39:23.809066] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:34.307 [2024-12-16 21:39:23.809084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.809096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:34.307 [2024-12-16 21:39:23.809107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:33:34.307 [2024-12-16 21:39:23.809116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.809422] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:34.307 [2024-12-16 21:39:23.809450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.809460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:34.307 [2024-12-16 21:39:23.809471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:34.307 [2024-12-16 21:39:23.809486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.809544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.809555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:34.307 [2024-12-16 21:39:23.809564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:34.307 [2024-12-16 21:39:23.809575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.809854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:34.307 [2024-12-16 21:39:23.809872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:34.307 [2024-12-16 21:39:23.809881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:33:34.307 [2024-12-16 21:39:23.809889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.809971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.809984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:34.307 [2024-12-16 21:39:23.809993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:34.307 [2024-12-16 21:39:23.810001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.810030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.810041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:34.307 [2024-12-16 21:39:23.810050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:34.307 [2024-12-16 21:39:23.810058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.810080] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:34.307 [2024-12-16 21:39:23.812175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.812366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:34.307 [2024-12-16 21:39:23.812383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:33:34.307 [2024-12-16 21:39:23.812392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.812437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.812446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:34.307 [2024-12-16 21:39:23.812455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:34.307 [2024-12-16 21:39:23.812463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.812516] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:34.307 [2024-12-16 21:39:23.812544] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:34.307 [2024-12-16 21:39:23.812588] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:34.307 [2024-12-16 21:39:23.812604] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:34.307 [2024-12-16 21:39:23.812731] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:34.307 [2024-12-16 21:39:23.812745] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:34.307 [2024-12-16 21:39:23.812757] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:34.307 [2024-12-16 21:39:23.812772] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:34.307 [2024-12-16 21:39:23.812787] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:34.307 [2024-12-16 21:39:23.812801] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:34.307 [2024-12-16 21:39:23.812810] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:34.307 [2024-12-16 21:39:23.812822] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:34.307 [2024-12-16 21:39:23.812831] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:34.307 [2024-12-16 21:39:23.812846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.812857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:34.307 [2024-12-16 21:39:23.812869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:33:34.307 [2024-12-16 21:39:23.812876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.307 [2024-12-16 21:39:23.812967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.307 [2024-12-16 21:39:23.812978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:34.308 [2024-12-16 21:39:23.812988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:33:34.308 [2024-12-16 21:39:23.813000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.308 [2024-12-16 21:39:23.813109] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:34.308 [2024-12-16 21:39:23.813124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:34.308 [2024-12-16 21:39:23.813136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:34.308 [2024-12-16 21:39:23.813145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:34.308 [2024-12-16 21:39:23.813164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:34.308 [2024-12-16 21:39:23.813209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:34.308 [2024-12-16 21:39:23.813220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:34.308 [2024-12-16 21:39:23.813237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:34.308 [2024-12-16 21:39:23.813246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:34.308 [2024-12-16 21:39:23.813257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:34.308 [2024-12-16 21:39:23.813265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:34.308 [2024-12-16 21:39:23.813273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:34.308 [2024-12-16 21:39:23.813281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:34.308 [2024-12-16 21:39:23.813295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:34.308 [2024-12-16 21:39:23.813304] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:34.308 [2024-12-16 21:39:23.813318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.308 [2024-12-16 21:39:23.813332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:34.308 [2024-12-16 21:39:23.813339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.308 [2024-12-16 21:39:23.813355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:34.308 [2024-12-16 21:39:23.813361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.308 [2024-12-16 21:39:23.813375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:34.308 [2024-12-16 21:39:23.813382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.308 [2024-12-16 21:39:23.813395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:34.308 [2024-12-16 21:39:23.813401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:34.308 [2024-12-16 21:39:23.813419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:34.308 [2024-12-16 21:39:23.813428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:34.308 [2024-12-16 21:39:23.813434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:34.308 [2024-12-16 21:39:23.813441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:34.308 [2024-12-16 21:39:23.813447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:34.308 [2024-12-16 21:39:23.813454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:34.308 [2024-12-16 21:39:23.813466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:34.308 [2024-12-16 21:39:23.813473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813481] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:34.308 [2024-12-16 21:39:23.813494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:34.308 [2024-12-16 21:39:23.813502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:34.308 [2024-12-16 21:39:23.813512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.308 [2024-12-16 21:39:23.813520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:34.308 [2024-12-16 21:39:23.813527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:34.308 [2024-12-16 21:39:23.813533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:34.308 
[2024-12-16 21:39:23.813542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:34.308 [2024-12-16 21:39:23.813551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:34.308 [2024-12-16 21:39:23.813558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:34.308 [2024-12-16 21:39:23.813567] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:34.308 [2024-12-16 21:39:23.813576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:34.308 [2024-12-16 21:39:23.813585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:34.308 [2024-12-16 21:39:23.813593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:34.308 [2024-12-16 21:39:23.813600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:34.308 [2024-12-16 21:39:23.813607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:34.308 [2024-12-16 21:39:23.813614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:34.308 [2024-12-16 21:39:23.813621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:34.308 [2024-12-16 21:39:23.813644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:34.308 [2024-12-16 21:39:23.813653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:34.308 [2024-12-16 21:39:23.813661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:34.308 [2024-12-16 21:39:23.813668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:34.308 [2024-12-16 21:39:23.813676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:34.308 [2024-12-16 21:39:23.813691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:34.308 [2024-12-16 21:39:23.813698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:34.308 [2024-12-16 21:39:23.813705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:34.308 [2024-12-16 21:39:23.813712] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:34.308 [2024-12-16 21:39:23.813721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:34.308 [2024-12-16 21:39:23.813732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:34.308 [2024-12-16 21:39:23.813739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:34.308 [2024-12-16 21:39:23.813747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:34.308 [2024-12-16 21:39:23.813755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:34.308 [2024-12-16 21:39:23.813762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.308 [2024-12-16 21:39:23.813772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:34.308 [2024-12-16 21:39:23.813781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:33:34.308 [2024-12-16 21:39:23.813795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.308 [2024-12-16 21:39:23.823762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.308 [2024-12-16 21:39:23.823808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:34.308 [2024-12-16 21:39:23.823819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.922 ms 00:33:34.308 [2024-12-16 21:39:23.823827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.308 [2024-12-16 21:39:23.823911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.308 [2024-12-16 21:39:23.823922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:34.308 [2024-12-16 21:39:23.823931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:34.308 [2024-12-16 21:39:23.823946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.308 [2024-12-16 21:39:23.846909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.308 [2024-12-16 21:39:23.847147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:34.308 [2024-12-16 21:39:23.847172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.905 ms 00:33:34.308 [2024-12-16 21:39:23.847184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.308 [2024-12-16 21:39:23.847242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.308 [2024-12-16 21:39:23.847263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:34.308 [2024-12-16 21:39:23.847275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:34.308 [2024-12-16 21:39:23.847286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.308 [2024-12-16 21:39:23.847426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.308 [2024-12-16 21:39:23.847450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:34.308 [2024-12-16 21:39:23.847463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:34.309 [2024-12-16 21:39:23.847477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.847663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.847681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:34.309 [2024-12-16 21:39:23.847699] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:33:34.309 [2024-12-16 21:39:23.847713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.855940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.855987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:34.309 [2024-12-16 21:39:23.856004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.198 ms 00:33:34.309 [2024-12-16 21:39:23.856012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.856129] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:34.309 [2024-12-16 21:39:23.856143] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:34.309 [2024-12-16 21:39:23.856155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.856164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:34.309 [2024-12-16 21:39:23.856174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:33:34.309 [2024-12-16 21:39:23.856190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.868539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.868580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:34.309 [2024-12-16 21:39:23.868592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.333 ms 00:33:34.309 [2024-12-16 21:39:23.868605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.868763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.868775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:34.309 [2024-12-16 21:39:23.868785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:33:34.309 [2024-12-16 21:39:23.868799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.868850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.868863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:34.309 [2024-12-16 21:39:23.868872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:34.309 [2024-12-16 21:39:23.868880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.869196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.869218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:34.309 [2024-12-16 21:39:23.869227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:33:34.309 [2024-12-16 21:39:23.869239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.869256] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:34.309 [2024-12-16 21:39:23.869266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.869280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:34.309 [2024-12-16 21:39:23.869296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:34.309 [2024-12-16 21:39:23.869304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.878606] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:34.309 [2024-12-16 21:39:23.878773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.878784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:34.309 [2024-12-16 21:39:23.878796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.449 ms 00:33:34.309 [2024-12-16 21:39:23.878810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.881158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.881345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:34.309 [2024-12-16 21:39:23.881363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.323 ms 00:33:34.309 [2024-12-16 21:39:23.881372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.881482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.881493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:34.309 [2024-12-16 21:39:23.881503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:34.309 [2024-12-16 21:39:23.881512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.881540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.881551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:34.309 [2024-12-16 21:39:23.881559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:34.309 [2024-12-16 21:39:23.881566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.881600] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:34.309 [2024-12-16 21:39:23.881613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.881621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:34.309 [2024-12-16 21:39:23.881656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:34.309 [2024-12-16 21:39:23.881664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.887933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.888114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:34.309 [2024-12-16 21:39:23.888134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.243 ms 00:33:34.309 [2024-12-16 21:39:23.888142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.888215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.309 [2024-12-16 21:39:23.888234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:34.309 [2024-12-16 21:39:23.888243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:33:34.309 [2024-12-16 21:39:23.888255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.309 [2024-12-16 21:39:23.890015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.137 ms, result 0 00:33:35.246  [2024-12-16T21:39:26.332Z] Copying: 10/1024 [MB] (10 MBps) [... intermediate spdk_dd copy-progress updates from 10/1024 MB through 1023/1024 MB condensed ...] [2024-12-16T21:40:49.120Z] Copying: 1024/1024 [MB] (average 12 MBps) [2024-12-16 21:40:49.024326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.420 [2024-12-16 21:40:49.024708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:59.420 [2024-12-16 21:40:49.024740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:59.420 [2024-12-16 21:40:49.024750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.420 [2024-12-16 21:40:49.028523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:59.420 [2024-12-16 21:40:49.031579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.420 [2024-12-16 21:40:49.031669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:59.420 [2024-12-16 21:40:49.031685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:34:59.421 [2024-12-16 21:40:49.031695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.421 [2024-12-16 21:40:49.043803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.421 [2024-12-16 21:40:49.044037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:59.421 [2024-12-16 21:40:49.044064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.694 ms 00:34:59.421 [2024-12-16 21:40:49.044074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.421 [2024-12-16 21:40:49.044116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:59.421 [2024-12-16 21:40:49.044126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:59.421 [2024-12-16 21:40:49.044136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:59.421 [2024-12-16 21:40:49.044145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.421 [2024-12-16 21:40:49.044212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.421 [2024-12-16 21:40:49.044227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:59.421 [2024-12-16 21:40:49.044235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:34:59.421 [2024-12-16 21:40:49.044245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.421 [2024-12-16 21:40:49.044260] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:59.421 [2024-12-16 21:40:49.044273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:34:59.421 [ftl_dev_dump_bands: Band 2 through Band 100 each report 0 / 261120 wr_cnt: 0 state: free; 99 identical *NOTICE* entries condensed] 00:34:59.422 [2024-12-16 21:40:49.045147] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:59.422 [2024-12-16 21:40:49.045160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5 00:34:59.422 [2024-12-16 21:40:49.045174] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:34:59.422 [2024-12-16 21:40:49.045182] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:34:59.422 [2024-12-16 21:40:49.045190] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:34:59.422 [2024-12-16 21:40:49.045197] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:34:59.422 [2024-12-16 21:40:49.045225] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:59.422 [2024-12-16 21:40:49.045235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:59.422 [2024-12-16 21:40:49.045243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:59.422 [2024-12-16 21:40:49.045250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:59.422 [2024-12-16 21:40:49.045257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:59.422 [2024-12-16 21:40:49.045266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.422 [2024-12-16 21:40:49.045275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:59.422 [2024-12-16 21:40:49.045283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:34:59.422 [2024-12-16 21:40:49.045295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.047848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.422 [2024-12-16 21:40:49.047886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:59.422 [2024-12-16 21:40:49.047908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.536 ms 00:34:59.422 [2024-12-16 21:40:49.047917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.048048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.422 [2024-12-16 21:40:49.048059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:59.422 [2024-12-16 21:40:49.048069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms
00:34:59.422 [2024-12-16 21:40:49.048079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.055923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.055979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:59.422 [2024-12-16 21:40:49.055990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.055998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.056059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.056068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:59.422 [2024-12-16 21:40:49.056077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.056085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.056139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.056149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:59.422 [2024-12-16 21:40:49.056167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.056177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.056199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.056212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:59.422 [2024-12-16 21:40:49.056221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.056228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.070488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.070544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:59.422 [2024-12-16 21:40:49.070554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.070563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.081551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.081595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:59.422 [2024-12-16 21:40:49.081605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.081613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.081672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.081681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:59.422 [2024-12-16 21:40:49.081690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.081704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.081739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.081747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:59.422 [2024-12-16 21:40:49.081756] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.081764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.081819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.081829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:59.422 [2024-12-16 21:40:49.081838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.081846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.081876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.081885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:59.422 [2024-12-16 21:40:49.081894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.081907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.081947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.081957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:59.422 [2024-12-16 21:40:49.081966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.081974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.082022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:59.422 [2024-12-16 21:40:49.082032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:59.422 [2024-12-16 21:40:49.082042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:59.422 [2024-12-16 21:40:49.082050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.422 [2024-12-16 21:40:49.082181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 59.477 ms, result 0 00:35:00.365 00:35:00.365 00:35:00.365 21:40:49 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:00.365 [2024-12-16 21:40:49.888941] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
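The restore.sh@80 step above reads data back out of the ftl0 bdev with spdk_dd so the checksum step at restore.sh@82 can verify it after the fast shutdown/restore cycle. A condensed sketch of that read-back-and-verify round trip, using the exact paths and offsets from the log; the hedged assumption (not confirmed by the log itself) is that --skip and --count follow GNU dd semantics and are counted in blocks:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Read 262144 blocks, starting 131072 blocks into ftl0, into a regular file.
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile" \
      --json="$SPDK/test/ftl/config/ftl.json" --skip=131072 --count=262144
  # Compare against the checksum recorded earlier in the test.
  md5sum -c "$SPDK/test/ftl/testfile.md5"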
00:35:00.365 [2024-12-16 21:40:49.889361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99633 ] 00:35:00.365 [2024-12-16 21:40:50.040146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.625 [2024-12-16 21:40:50.072356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.625 [2024-12-16 21:40:50.186573] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:00.625 [2024-12-16 21:40:50.186669] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:00.888 [2024-12-16 21:40:50.348805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.888 [2024-12-16 21:40:50.348876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:00.888 [2024-12-16 21:40:50.348891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:00.889 [2024-12-16 21:40:50.348900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.348964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.348975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:00.889 [2024-12-16 21:40:50.348985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:35:00.889 [2024-12-16 21:40:50.349000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.349030] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:00.889 [2024-12-16 21:40:50.349334] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:00.889 [2024-12-16 21:40:50.349354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.349362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:00.889 [2024-12-16 21:40:50.349374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:35:00.889 [2024-12-16 21:40:50.349383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.349904] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:00.889 [2024-12-16 21:40:50.349974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.349996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:00.889 [2024-12-16 21:40:50.350018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:35:00.889 [2024-12-16 21:40:50.350046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.350124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.350151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:00.889 [2024-12-16 21:40:50.350173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:35:00.889 [2024-12-16 21:40:50.350199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.350535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:00.889 [2024-12-16 21:40:50.350747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:00.889 [2024-12-16 21:40:50.350779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:35:00.889 [2024-12-16 21:40:50.350799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.350917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.350943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:00.889 [2024-12-16 21:40:50.350969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:35:00.889 [2024-12-16 21:40:50.350989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.351030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.351061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:00.889 [2024-12-16 21:40:50.351083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:00.889 [2024-12-16 21:40:50.351101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.351135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:00.889 [2024-12-16 21:40:50.353486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.353687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:00.889 [2024-12-16 21:40:50.353764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.356 ms 00:35:00.889 [2024-12-16 21:40:50.353776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.353831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.353847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:00.889 [2024-12-16 21:40:50.353857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:35:00.889 [2024-12-16 21:40:50.353866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.353925] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:00.889 [2024-12-16 21:40:50.353954] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:00.889 [2024-12-16 21:40:50.353998] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:00.889 [2024-12-16 21:40:50.354018] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:00.889 [2024-12-16 21:40:50.354124] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:00.889 [2024-12-16 21:40:50.354137] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:00.889 [2024-12-16 21:40:50.354150] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:00.889 [2024-12-16 21:40:50.354162] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:00.889 [2024-12-16 21:40:50.354174] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:00.889 [2024-12-16 21:40:50.354186] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:00.889 [2024-12-16 21:40:50.354193] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:00.889 [2024-12-16 21:40:50.354202] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:00.889 [2024-12-16 21:40:50.354213] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:00.889 [2024-12-16 21:40:50.354224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.354233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:00.889 [2024-12-16 21:40:50.354241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:35:00.889 [2024-12-16 21:40:50.354249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.354335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.889 [2024-12-16 21:40:50.354346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:00.889 [2024-12-16 21:40:50.354360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:00.889 [2024-12-16 21:40:50.354367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.889 [2024-12-16 21:40:50.354464] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:00.889 [2024-12-16 21:40:50.354477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:00.889 [2024-12-16 21:40:50.354486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:00.889 [2024-12-16 21:40:50.354498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:00.889 [2024-12-16 21:40:50.354512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:00.889 [2024-12-16 21:40:50.354531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:00.889 [2024-12-16 21:40:50.354539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:00.889 [2024-12-16 21:40:50.354557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:00.889 [2024-12-16 21:40:50.354565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:00.889 [2024-12-16 21:40:50.354574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:00.889 [2024-12-16 21:40:50.354581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:00.889 [2024-12-16 21:40:50.354588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:00.889 [2024-12-16 21:40:50.354595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:00.889 [2024-12-16 21:40:50.354609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:00.889 [2024-12-16 21:40:50.354616] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:00.889 [2024-12-16 21:40:50.354654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:00.889 [2024-12-16 21:40:50.354670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:00.889 [2024-12-16 21:40:50.354680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:00.889 [2024-12-16 21:40:50.354694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:00.889 [2024-12-16 21:40:50.354701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:00.889 [2024-12-16 21:40:50.354715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:00.889 [2024-12-16 21:40:50.354722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:00.889 [2024-12-16 21:40:50.354737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:00.889 [2024-12-16 21:40:50.354746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:00.889 [2024-12-16 21:40:50.354760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:00.889 [2024-12-16 21:40:50.354767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:00.889 [2024-12-16 21:40:50.354774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:00.889 [2024-12-16 21:40:50.354781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:00.889 [2024-12-16 21:40:50.354787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:00.889 [2024-12-16 21:40:50.354797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:00.889 [2024-12-16 21:40:50.354811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:00.889 [2024-12-16 21:40:50.354819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.889 [2024-12-16 21:40:50.354826] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:00.889 [2024-12-16 21:40:50.354838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:00.890 [2024-12-16 21:40:50.354846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:00.890 [2024-12-16 21:40:50.354857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.890 [2024-12-16 21:40:50.354865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:00.890 [2024-12-16 21:40:50.354873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:00.890 [2024-12-16 21:40:50.354883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:00.890 
[2024-12-16 21:40:50.354891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:00.890 [2024-12-16 21:40:50.354898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:00.890 [2024-12-16 21:40:50.354904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:00.890 [2024-12-16 21:40:50.354913] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:00.890 [2024-12-16 21:40:50.354922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:00.890 [2024-12-16 21:40:50.354939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:00.890 [2024-12-16 21:40:50.354948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:00.890 [2024-12-16 21:40:50.354956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:00.890 [2024-12-16 21:40:50.354963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:00.890 [2024-12-16 21:40:50.354970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:00.890 [2024-12-16 21:40:50.354977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:00.890 [2024-12-16 21:40:50.354984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:00.890 [2024-12-16 21:40:50.354991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:00.890 [2024-12-16 21:40:50.354998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:00.890 [2024-12-16 21:40:50.355006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:00.890 [2024-12-16 21:40:50.355014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:00.890 [2024-12-16 21:40:50.355027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:00.890 [2024-12-16 21:40:50.355037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:00.890 [2024-12-16 21:40:50.355045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:00.890 [2024-12-16 21:40:50.355052] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:00.890 [2024-12-16 21:40:50.355061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:00.890 [2024-12-16 21:40:50.355071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:00.890 [2024-12-16 21:40:50.355080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:00.890 [2024-12-16 21:40:50.355087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:00.890 [2024-12-16 21:40:50.355094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:00.890 [2024-12-16 21:40:50.355102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.355110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:00.890 [2024-12-16 21:40:50.355118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:35:00.890 [2024-12-16 21:40:50.355126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.365508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.365727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:00.890 [2024-12-16 21:40:50.365747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.338 ms 00:35:00.890 [2024-12-16 21:40:50.365755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.365844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.365853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:00.890 [2024-12-16 21:40:50.365863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:35:00.890 [2024-12-16 21:40:50.365881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.389670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.389735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:00.890 [2024-12-16 21:40:50.389754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.728 ms 00:35:00.890 [2024-12-16 21:40:50.389768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.389839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.389855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:00.890 [2024-12-16 21:40:50.389871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:00.890 [2024-12-16 21:40:50.389884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.390038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.390062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:00.890 [2024-12-16 21:40:50.390076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:35:00.890 [2024-12-16 21:40:50.390090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.390276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.390307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:00.890 [2024-12-16 21:40:50.390322] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:35:00.890 [2024-12-16 21:40:50.390336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.398464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.398513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:00.890 [2024-12-16 21:40:50.398532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.093 ms 00:35:00.890 [2024-12-16 21:40:50.398541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.398691] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:00.890 [2024-12-16 21:40:50.398707] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:00.890 [2024-12-16 21:40:50.398718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.398727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:00.890 [2024-12-16 21:40:50.398737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:35:00.890 [2024-12-16 21:40:50.398748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.411053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.411094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:00.890 [2024-12-16 21:40:50.411106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.287 ms 00:35:00.890 [2024-12-16 21:40:50.411116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.411245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.411263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:00.890 [2024-12-16 21:40:50.411273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:35:00.890 [2024-12-16 21:40:50.411292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.411345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.411359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:00.890 [2024-12-16 21:40:50.411368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:35:00.890 [2024-12-16 21:40:50.411377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.411729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.411752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:00.890 [2024-12-16 21:40:50.411761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:35:00.890 [2024-12-16 21:40:50.411769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.411785] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:00.890 [2024-12-16 21:40:50.411796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.411807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:35:00.890 [2024-12-16 21:40:50.411821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:00.890 [2024-12-16 21:40:50.411829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.421792] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:00.890 [2024-12-16 21:40:50.421953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.421964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:00.890 [2024-12-16 21:40:50.421976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.105 ms 00:35:00.890 [2024-12-16 21:40:50.421984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.424566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.890 [2024-12-16 21:40:50.424602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:00.890 [2024-12-16 21:40:50.424613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:35:00.890 [2024-12-16 21:40:50.424621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.890 [2024-12-16 21:40:50.424731] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:35:00.891 [2024-12-16 21:40:50.425384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.891 [2024-12-16 21:40:50.425411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:00.891 [2024-12-16 21:40:50.425421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:35:00.891 [2024-12-16 21:40:50.425433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.891 [2024-12-16 21:40:50.425461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.891 [2024-12-16 21:40:50.425470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:00.891 [2024-12-16 21:40:50.425479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:00.891 [2024-12-16 21:40:50.425487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.891 [2024-12-16 21:40:50.425525] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:00.891 [2024-12-16 21:40:50.425535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.891 [2024-12-16 21:40:50.425543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:00.891 [2024-12-16 21:40:50.425552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:00.891 [2024-12-16 21:40:50.425561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.891 [2024-12-16 21:40:50.432504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.891 [2024-12-16 21:40:50.432735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:00.891 [2024-12-16 21:40:50.432758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.923 ms 00:35:00.891 [2024-12-16 21:40:50.432767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.891 [2024-12-16 21:40:50.433148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.891 [2024-12-16 21:40:50.433194] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:00.891 [2024-12-16 21:40:50.433222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:35:00.891 [2024-12-16 21:40:50.433232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.891 [2024-12-16 21:40:50.434566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 85.289 ms, result 0 00:35:02.278  [2024-12-16T21:40:52.920Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-16T21:40:53.854Z] Copying: 20/1024 [MB] (10 MBps) [2024-12-16T21:40:54.791Z] Copying: 33/1024 [MB] (12 MBps) [2024-12-16T21:40:55.734Z] Copying: 45/1024 [MB] (12 MBps) [2024-12-16T21:40:56.673Z] Copying: 56/1024 [MB] (11 MBps) [2024-12-16T21:40:58.076Z] Copying: 68/1024 [MB] (11 MBps) [2024-12-16T21:40:58.642Z] Copying: 83/1024 [MB] (15 MBps) [2024-12-16T21:41:00.024Z] Copying: 95/1024 [MB] (11 MBps) [2024-12-16T21:41:00.960Z] Copying: 109/1024 [MB] (14 MBps) [2024-12-16T21:41:01.897Z] Copying: 122/1024 [MB] (12 MBps) [2024-12-16T21:41:02.834Z] Copying: 135/1024 [MB] (12 MBps) [2024-12-16T21:41:03.779Z] Copying: 147/1024 [MB] (11 MBps) [2024-12-16T21:41:04.719Z] Copying: 163/1024 [MB] (16 MBps) [2024-12-16T21:41:05.652Z] Copying: 178/1024 [MB] (14 MBps) [2024-12-16T21:41:07.026Z] Copying: 189/1024 [MB] (11 MBps) [2024-12-16T21:41:07.963Z] Copying: 201/1024 [MB] (11 MBps) [2024-12-16T21:41:08.906Z] Copying: 213/1024 [MB] (11 MBps) [2024-12-16T21:41:09.850Z] Copying: 228/1024 [MB] (15 MBps) [2024-12-16T21:41:10.794Z] Copying: 245/1024 [MB] (17 MBps) [2024-12-16T21:41:11.732Z] Copying: 256/1024 [MB] (10 MBps) [2024-12-16T21:41:12.676Z] Copying: 267/1024 [MB] (10 MBps) [2024-12-16T21:41:14.060Z] Copying: 282/1024 [MB] (14 MBps) [2024-12-16T21:41:14.630Z] Copying: 298/1024 [MB] (16 MBps) [2024-12-16T21:41:16.011Z] Copying: 320/1024 [MB] (21 MBps) [2024-12-16T21:41:16.952Z] Copying: 339/1024 [MB] (19 MBps) [2024-12-16T21:41:17.892Z] Copying: 360/1024 [MB] (20 MBps) [2024-12-16T21:41:18.833Z] Copying: 380/1024 [MB] (19 MBps) [2024-12-16T21:41:19.774Z] Copying: 400/1024 [MB] (20 MBps) [2024-12-16T21:41:20.712Z] Copying: 421/1024 [MB] (21 MBps) [2024-12-16T21:41:21.652Z] Copying: 435/1024 [MB] (13 MBps) [2024-12-16T21:41:23.035Z] Copying: 449/1024 [MB] (14 MBps) [2024-12-16T21:41:23.994Z] Copying: 467/1024 [MB] (18 MBps) [2024-12-16T21:41:24.946Z] Copying: 485/1024 [MB] (17 MBps) [2024-12-16T21:41:25.889Z] Copying: 501/1024 [MB] (16 MBps) [2024-12-16T21:41:26.830Z] Copying: 512/1024 [MB] (11 MBps) [2024-12-16T21:41:27.773Z] Copying: 533/1024 [MB] (20 MBps) [2024-12-16T21:41:28.716Z] Copying: 544/1024 [MB] (11 MBps) [2024-12-16T21:41:29.656Z] Copying: 555/1024 [MB] (11 MBps) [2024-12-16T21:41:31.041Z] Copying: 567/1024 [MB] (11 MBps) [2024-12-16T21:41:31.979Z] Copying: 585/1024 [MB] (18 MBps) [2024-12-16T21:41:32.915Z] Copying: 597/1024 [MB] (11 MBps) [2024-12-16T21:41:33.859Z] Copying: 610/1024 [MB] (13 MBps) [2024-12-16T21:41:34.798Z] Copying: 621/1024 [MB] (10 MBps) [2024-12-16T21:41:35.731Z] Copying: 632/1024 [MB] (11 MBps) [2024-12-16T21:41:36.673Z] Copying: 644/1024 [MB] (12 MBps) [2024-12-16T21:41:38.050Z] Copying: 655/1024 [MB] (11 MBps) [2024-12-16T21:41:38.986Z] Copying: 667/1024 [MB] (11 MBps) [2024-12-16T21:41:39.929Z] Copying: 679/1024 [MB] (11 MBps) [2024-12-16T21:41:40.864Z] Copying: 691/1024 [MB] (12 MBps) [2024-12-16T21:41:41.808Z] Copying: 702/1024 [MB] (11 MBps) [2024-12-16T21:41:42.750Z] Copying: 713/1024 [MB] (11 MBps) 
[2024-12-16T21:41:43.687Z] Copying: 724/1024 [MB] (10 MBps) [2024-12-16T21:41:44.629Z] Copying: 736/1024 [MB] (11 MBps) [2024-12-16T21:41:46.005Z] Copying: 746/1024 [MB] (10 MBps) [2024-12-16T21:41:46.945Z] Copying: 758/1024 [MB] (11 MBps) [2024-12-16T21:41:47.880Z] Copying: 769/1024 [MB] (10 MBps) [2024-12-16T21:41:48.814Z] Copying: 780/1024 [MB] (11 MBps) [2024-12-16T21:41:49.793Z] Copying: 792/1024 [MB] (11 MBps) [2024-12-16T21:41:50.762Z] Copying: 804/1024 [MB] (11 MBps) [2024-12-16T21:41:51.706Z] Copying: 816/1024 [MB] (11 MBps) [2024-12-16T21:41:52.640Z] Copying: 827/1024 [MB] (10 MBps) [2024-12-16T21:41:54.017Z] Copying: 838/1024 [MB] (11 MBps) [2024-12-16T21:41:54.960Z] Copying: 850/1024 [MB] (11 MBps) [2024-12-16T21:41:55.900Z] Copying: 868/1024 [MB] (18 MBps) [2024-12-16T21:41:56.842Z] Copying: 879/1024 [MB] (10 MBps) [2024-12-16T21:41:57.777Z] Copying: 891/1024 [MB] (11 MBps) [2024-12-16T21:41:58.715Z] Copying: 902/1024 [MB] (11 MBps) [2024-12-16T21:41:59.658Z] Copying: 921/1024 [MB] (18 MBps) [2024-12-16T21:42:01.036Z] Copying: 931/1024 [MB] (10 MBps) [2024-12-16T21:42:01.973Z] Copying: 942/1024 [MB] (11 MBps) [2024-12-16T21:42:02.914Z] Copying: 954/1024 [MB] (11 MBps) [2024-12-16T21:42:03.853Z] Copying: 968/1024 [MB] (14 MBps) [2024-12-16T21:42:04.795Z] Copying: 980/1024 [MB] (12 MBps) [2024-12-16T21:42:05.729Z] Copying: 999/1024 [MB] (18 MBps) [2024-12-16T21:42:06.672Z] Copying: 1011/1024 [MB] (11 MBps) [2024-12-16T21:42:06.933Z] Copying: 1022/1024 [MB] (11 MBps) [2024-12-16T21:42:07.196Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-16 21:42:06.983476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.496 [2024-12-16 21:42:06.983548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:17.496 [2024-12-16 21:42:06.983565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:17.496 [2024-12-16 21:42:06.983575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.496 [2024-12-16 21:42:06.983600] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:17.496 [2024-12-16 21:42:06.984385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.496 [2024-12-16 21:42:06.984423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:17.496 [2024-12-16 21:42:06.984434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:36:17.496 [2024-12-16 21:42:06.984450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.496 [2024-12-16 21:42:06.984701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.496 [2024-12-16 21:42:06.984714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:17.496 [2024-12-16 21:42:06.984734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:36:17.496 [2024-12-16 21:42:06.984744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.496 [2024-12-16 21:42:06.984775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.496 [2024-12-16 21:42:06.984788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:17.496 [2024-12-16 21:42:06.984797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:17.496 [2024-12-16 21:42:06.984805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.496 [2024-12-16 
21:42:06.984866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.496 [2024-12-16 21:42:06.984878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:17.496 [2024-12-16 21:42:06.984887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:36:17.496 [2024-12-16 21:42:06.984896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.496 [2024-12-16 21:42:06.984912] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:17.496 [2024-12-16 21:42:06.984924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:17.496 [2024-12-16 21:42:06.984940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.984949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.984957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.984965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.984976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.984984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.984993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:36:17.496 [2024-12-16 21:42:06.985115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:17.496 [2024-12-16 21:42:06.985390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985747] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:17.497 [2024-12-16 21:42:06.985794] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:17.497 [2024-12-16 21:42:06.985803] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5 00:36:17.497 [2024-12-16 21:42:06.985812] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:17.497 [2024-12-16 21:42:06.985820] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 00:36:17.497 [2024-12-16 21:42:06.985829] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:36:17.497 [2024-12-16 21:42:06.985840] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:36:17.497 [2024-12-16 21:42:06.985847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:17.497 [2024-12-16 21:42:06.985856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:17.497 [2024-12-16 21:42:06.985868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:17.497 [2024-12-16 21:42:06.985875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:17.497 [2024-12-16 21:42:06.985884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:17.497 [2024-12-16 21:42:06.985893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.497 [2024-12-16 21:42:06.985901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:17.497 [2024-12-16 21:42:06.985909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:36:17.497 [2024-12-16 21:42:06.985916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.497 [2024-12-16 21:42:06.988900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.497 [2024-12-16 21:42:06.988939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:17.497 [2024-12-16 21:42:06.988958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:36:17.497 [2024-12-16 21:42:06.988968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.497 [2024-12-16 21:42:06.989095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.497 [2024-12-16 21:42:06.989105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:17.497 [2024-12-16 21:42:06.989114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:36:17.497 [2024-12-16 21:42:06.989121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.497 [2024-12-16 21:42:06.996957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.497 [2024-12-16 21:42:06.997140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:17.497 [2024-12-16 21:42:06.997202] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.497 [2024-12-16 21:42:06.997241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.497 [2024-12-16 21:42:06.997334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.497 [2024-12-16 21:42:06.997361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:17.497 [2024-12-16 21:42:06.997383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.497 [2024-12-16 21:42:06.997405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.497 [2024-12-16 21:42:06.997485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.497 [2024-12-16 21:42:06.997522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:17.497 [2024-12-16 21:42:06.997548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.497 [2024-12-16 21:42:06.997616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.497 [2024-12-16 21:42:06.997712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.497 [2024-12-16 21:42:06.997740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:17.497 [2024-12-16 21:42:06.997765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.497 [2024-12-16 21:42:06.997789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.497 [2024-12-16 21:42:07.012285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.497 [2024-12-16 21:42:07.012481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:17.497 [2024-12-16 21:42:07.012540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.497 [2024-12-16 21:42:07.012573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.497 [2024-12-16 21:42:07.024642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.497 [2024-12-16 21:42:07.024833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:17.497 [2024-12-16 21:42:07.024890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.497 [2024-12-16 21:42:07.024913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.497 [2024-12-16 21:42:07.024980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.497 [2024-12-16 21:42:07.025003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:17.497 [2024-12-16 21:42:07.025032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.497 [2024-12-16 21:42:07.025053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.498 [2024-12-16 21:42:07.025102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.498 [2024-12-16 21:42:07.025249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:17.498 [2024-12-16 21:42:07.025281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.498 [2024-12-16 21:42:07.025300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.498 [2024-12-16 21:42:07.025377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.498 [2024-12-16 21:42:07.025403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:36:17.498 [2024-12-16 21:42:07.025424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.498 [2024-12-16 21:42:07.025714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.498 [2024-12-16 21:42:07.025841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.498 [2024-12-16 21:42:07.025873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:17.498 [2024-12-16 21:42:07.025958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.498 [2024-12-16 21:42:07.025985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.498 [2024-12-16 21:42:07.026047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.498 [2024-12-16 21:42:07.026081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:17.498 [2024-12-16 21:42:07.026102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.498 [2024-12-16 21:42:07.026190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.498 [2024-12-16 21:42:07.026256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.498 [2024-12-16 21:42:07.026282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:17.498 [2024-12-16 21:42:07.026310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.498 [2024-12-16 21:42:07.026331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.498 [2024-12-16 21:42:07.026492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.971 ms, result 0 00:36:17.758 00:36:17.758 00:36:17.758 21:42:07 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:20.298 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 97094 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 97094 ']' 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 97094 00:36:20.298 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (97094) - No such process 00:36:20.298 Process with pid 97094 is not found 00:36:20.298 Remove shared memory files 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 97094 is not found' 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:20.298 21:42:09 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:36:20.299 21:42:09 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f 
/dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_band_md /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_l2p_l1 /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_l2p_l2 /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_l2p_l2_ctx /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_nvc_md /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_p2l_pool /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_sb /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_sb_shm /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_trim_bitmap /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_trim_log /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_trim_md /dev/hugepages/ftl_e12ac6a9-ad77-48b0-ba92-4adb4c44ffc5_vmap 00:36:20.299 21:42:09 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:36:20.299 21:42:09 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:20.299 21:42:09 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:36:20.299 ************************************ 00:36:20.299 END TEST ftl_restore_fast 00:36:20.299 ************************************ 00:36:20.299 00:36:20.299 real 5m28.796s 00:36:20.299 user 5m18.304s 00:36:20.299 sys 0m10.181s 00:36:20.299 21:42:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.299 21:42:09 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:36:20.299 Process with pid 87832 is not found 00:36:20.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:20.299 21:42:09 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:20.299 21:42:09 ftl -- ftl/ftl.sh@14 -- # killprocess 87832 00:36:20.299 21:42:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 87832 ']' 00:36:20.299 21:42:09 ftl -- common/autotest_common.sh@958 -- # kill -0 87832 00:36:20.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87832) - No such process 00:36:20.299 21:42:09 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 87832 is not found' 00:36:20.299 21:42:09 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:20.299 21:42:09 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=100443 00:36:20.299 21:42:09 ftl -- ftl/ftl.sh@20 -- # waitforlisten 100443 00:36:20.299 21:42:09 ftl -- common/autotest_common.sh@835 -- # '[' -z 100443 ']' 00:36:20.299 21:42:09 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.299 21:42:09 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:20.299 21:42:09 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:20.299 21:42:09 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:20.299 21:42:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:20.299 21:42:09 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:20.299 [2024-12-16 21:42:09.667466] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:36:20.299 [2024-12-16 21:42:09.667587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100443 ] 00:36:20.299 [2024-12-16 21:42:09.810360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.299 [2024-12-16 21:42:09.836219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.870 21:42:10 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:20.870 21:42:10 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:20.870 21:42:10 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:21.130 nvme0n1 00:36:21.130 21:42:10 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:21.130 21:42:10 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:21.130 21:42:10 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:21.391 21:42:11 ftl -- ftl/common.sh@28 -- # stores=54762fea-a5ec-4da6-a34b-5a18ba199f7e 00:36:21.391 21:42:11 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:21.391 21:42:11 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54762fea-a5ec-4da6-a34b-5a18ba199f7e 00:36:21.651 21:42:11 ftl -- ftl/ftl.sh@23 -- # killprocess 100443 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@954 -- # '[' -z 100443 ']' 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@958 -- # kill -0 100443 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@959 -- # uname 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100443 00:36:21.651 killing process with pid 100443 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100443' 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@973 -- # kill 100443 00:36:21.651 21:42:11 ftl -- common/autotest_common.sh@978 -- # wait 100443 00:36:21.913 21:42:11 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:22.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:22.174 Waiting for block devices as requested 00:36:22.436 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:22.436 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:22.436 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:22.697 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:28.049 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:28.049 21:42:17 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:28.049 21:42:17 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:28.049 Remove shared memory files 00:36:28.049 21:42:17 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:28.049 21:42:17 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:28.049 21:42:17 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:28.049 21:42:17 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:28.049 21:42:17 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:28.049 00:36:28.049 
real 19m14.802s 00:36:28.049 user 21m12.751s 00:36:28.049 sys 1m18.177s 00:36:28.049 21:42:17 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:28.049 21:42:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:28.049 ************************************ 00:36:28.049 END TEST ftl 00:36:28.049 ************************************ 00:36:28.049 21:42:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:28.049 21:42:17 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:28.049 21:42:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:28.049 21:42:17 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:28.049 21:42:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:28.049 21:42:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:28.049 21:42:17 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:28.049 21:42:17 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:28.049 21:42:17 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:28.049 21:42:17 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:28.049 21:42:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.049 21:42:17 -- common/autotest_common.sh@10 -- # set +x 00:36:28.049 21:42:17 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:28.049 21:42:17 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:28.049 21:42:17 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:28.049 21:42:17 -- common/autotest_common.sh@10 -- # set +x 00:36:29.433 INFO: APP EXITING 00:36:29.433 INFO: killing all VMs 00:36:29.433 INFO: killing vhost app 00:36:29.433 INFO: EXIT DONE 00:36:29.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:29.948 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:29.948 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:29.948 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:29.948 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:30.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:30.772 Cleaning 00:36:30.772 Removing: /var/run/dpdk/spdk0/config 00:36:30.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:30.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:30.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:30.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:30.772 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:30.772 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:30.772 Removing: /var/run/dpdk/spdk0 00:36:30.772 Removing: /var/run/dpdk/spdk_pid100443 00:36:30.772 Removing: /var/run/dpdk/spdk_pid70761 00:36:30.772 Removing: /var/run/dpdk/spdk_pid70919 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71121 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71203 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71231 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71343 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71355 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71538 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71611 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71691 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71791 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71870 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71905 00:36:30.772 Removing: /var/run/dpdk/spdk_pid71942 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72007 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72107 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72527 00:36:30.772 
Removing: /var/run/dpdk/spdk_pid72569 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72621 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72637 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72694 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72700 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72758 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72774 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72816 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72834 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72876 00:36:30.772 Removing: /var/run/dpdk/spdk_pid72894 00:36:30.772 Removing: /var/run/dpdk/spdk_pid73021 00:36:30.772 Removing: /var/run/dpdk/spdk_pid73052 00:36:30.772 Removing: /var/run/dpdk/spdk_pid73141 00:36:30.772 Removing: /var/run/dpdk/spdk_pid73302 00:36:30.772 Removing: /var/run/dpdk/spdk_pid73374 00:36:30.772 Removing: /var/run/dpdk/spdk_pid73401 00:36:30.772 Removing: /var/run/dpdk/spdk_pid73828 00:36:30.772 Removing: /var/run/dpdk/spdk_pid73915 00:36:30.772 Removing: /var/run/dpdk/spdk_pid74016 00:36:30.772 Removing: /var/run/dpdk/spdk_pid74053 00:36:30.772 Removing: /var/run/dpdk/spdk_pid74078 00:36:30.772 Removing: /var/run/dpdk/spdk_pid74157 00:36:30.772 Removing: /var/run/dpdk/spdk_pid74763 00:36:30.772 Removing: /var/run/dpdk/spdk_pid74794 00:36:30.772 Removing: /var/run/dpdk/spdk_pid75258 00:36:30.772 Removing: /var/run/dpdk/spdk_pid75351 00:36:30.772 Removing: /var/run/dpdk/spdk_pid75449 00:36:30.772 Removing: /var/run/dpdk/spdk_pid75486 00:36:30.772 Removing: /var/run/dpdk/spdk_pid75511 00:36:30.772 Removing: /var/run/dpdk/spdk_pid75537 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77358 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77473 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77477 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77495 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77541 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77545 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77557 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77596 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77600 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77612 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77651 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77655 00:36:30.772 Removing: /var/run/dpdk/spdk_pid77667 00:36:30.772 Removing: /var/run/dpdk/spdk_pid79047 00:36:30.772 Removing: /var/run/dpdk/spdk_pid79134 00:36:30.772 Removing: /var/run/dpdk/spdk_pid80532 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82284 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82336 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82411 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82504 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82589 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82676 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82738 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82803 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82902 00:36:30.772 Removing: /var/run/dpdk/spdk_pid82983 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83073 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83130 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83200 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83293 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83378 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83464 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83516 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83586 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83684 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83765 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83855 00:36:30.772 Removing: /var/run/dpdk/spdk_pid83907 00:36:30.772 Removing: 
/var/run/dpdk/spdk_pid83976 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84039 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84108 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84200 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84285 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84369 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84432 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84495 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84564 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84631 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84729 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84808 00:36:30.772 Removing: /var/run/dpdk/spdk_pid84949 00:36:30.772 Removing: /var/run/dpdk/spdk_pid85221 00:36:30.772 Removing: /var/run/dpdk/spdk_pid85252 00:36:30.772 Removing: /var/run/dpdk/spdk_pid85694 00:36:30.772 Removing: /var/run/dpdk/spdk_pid85871 00:36:30.772 Removing: /var/run/dpdk/spdk_pid85962 00:36:30.772 Removing: /var/run/dpdk/spdk_pid86068 00:36:30.772 Removing: /var/run/dpdk/spdk_pid86110 00:36:30.772 Removing: /var/run/dpdk/spdk_pid86135 00:36:30.772 Removing: /var/run/dpdk/spdk_pid86431 00:36:30.772 Removing: /var/run/dpdk/spdk_pid86469 00:36:30.772 Removing: /var/run/dpdk/spdk_pid86525 00:36:30.772 Removing: /var/run/dpdk/spdk_pid86891 00:36:30.772 Removing: /var/run/dpdk/spdk_pid87029 00:36:30.772 Removing: /var/run/dpdk/spdk_pid87832 00:36:30.772 Removing: /var/run/dpdk/spdk_pid87953 00:36:30.772 Removing: /var/run/dpdk/spdk_pid88111 00:36:31.031 Removing: /var/run/dpdk/spdk_pid88208 00:36:31.031 Removing: /var/run/dpdk/spdk_pid88522 00:36:31.031 Removing: /var/run/dpdk/spdk_pid88773 00:36:31.031 Removing: /var/run/dpdk/spdk_pid89116 00:36:31.031 Removing: /var/run/dpdk/spdk_pid89271 00:36:31.031 Removing: /var/run/dpdk/spdk_pid89456 00:36:31.031 Removing: /var/run/dpdk/spdk_pid89492 00:36:31.031 Removing: /var/run/dpdk/spdk_pid89653 00:36:31.031 Removing: /var/run/dpdk/spdk_pid89677 00:36:31.031 Removing: /var/run/dpdk/spdk_pid89713 00:36:31.031 Removing: /var/run/dpdk/spdk_pid89949 00:36:31.031 Removing: /var/run/dpdk/spdk_pid90170 00:36:31.031 Removing: /var/run/dpdk/spdk_pid90826 00:36:31.031 Removing: /var/run/dpdk/spdk_pid91619 00:36:31.031 Removing: /var/run/dpdk/spdk_pid92299 00:36:31.031 Removing: /var/run/dpdk/spdk_pid93174 00:36:31.031 Removing: /var/run/dpdk/spdk_pid93310 00:36:31.031 Removing: /var/run/dpdk/spdk_pid93399 00:36:31.031 Removing: /var/run/dpdk/spdk_pid93809 00:36:31.031 Removing: /var/run/dpdk/spdk_pid93856 00:36:31.031 Removing: /var/run/dpdk/spdk_pid94650 00:36:31.031 Removing: /var/run/dpdk/spdk_pid95257 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96140 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96261 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96294 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96353 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96403 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96456 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96633 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96712 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96769 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96847 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96876 00:36:31.031 Removing: /var/run/dpdk/spdk_pid96944 00:36:31.031 Removing: /var/run/dpdk/spdk_pid97094 00:36:31.031 Removing: /var/run/dpdk/spdk_pid97294 00:36:31.031 Removing: /var/run/dpdk/spdk_pid97950 00:36:31.031 Removing: /var/run/dpdk/spdk_pid98763 00:36:31.031 Removing: /var/run/dpdk/spdk_pid99633 00:36:31.031 Clean 00:36:31.031 21:42:20 -- common/autotest_common.sh@1453 -- # return 0 00:36:31.031 
21:42:20 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:31.031 21:42:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:31.031 21:42:20 -- common/autotest_common.sh@10 -- # set +x 00:36:31.031 21:42:20 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:31.031 21:42:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:31.031 21:42:20 -- common/autotest_common.sh@10 -- # set +x 00:36:31.031 21:42:20 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:31.031 21:42:20 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:31.031 21:42:20 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:31.031 21:42:20 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:31.031 21:42:20 -- spdk/autotest.sh@398 -- # hostname 00:36:31.031 21:42:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:31.290 geninfo: WARNING: invalid characters removed from testname! 00:36:57.878 21:42:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:59.261 21:42:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:01.811 21:42:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:04.359 21:42:53 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:06.262 21:42:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:08.164 21:42:57 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:10.068 21:42:59 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:10.069 21:42:59 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:10.069 21:42:59 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:37:10.069 21:42:59 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:10.069 21:42:59 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:10.069 21:42:59 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:10.328 + [[ -n 5762 ]] 00:37:10.328 + sudo kill 5762 00:37:10.337 [Pipeline] } 00:37:10.353 [Pipeline] // timeout 00:37:10.358 [Pipeline] } 00:37:10.372 [Pipeline] // stage 00:37:10.377 [Pipeline] } 00:37:10.392 [Pipeline] // catchError 00:37:10.401 [Pipeline] stage 00:37:10.403 [Pipeline] { (Stop VM) 00:37:10.415 [Pipeline] sh 00:37:10.698 + vagrant halt 00:37:13.240 ==> default: Halting domain... 00:37:19.843 [Pipeline] sh 00:37:20.126 + vagrant destroy -f 00:37:22.674 ==> default: Removing domain... 00:37:23.630 [Pipeline] sh 00:37:23.915 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:37:23.926 [Pipeline] } 00:37:23.940 [Pipeline] // stage 00:37:23.946 [Pipeline] } 00:37:23.960 [Pipeline] // dir 00:37:23.965 [Pipeline] } 00:37:23.980 [Pipeline] // wrap 00:37:23.986 [Pipeline] } 00:37:23.998 [Pipeline] // catchError 00:37:24.007 [Pipeline] stage 00:37:24.009 [Pipeline] { (Epilogue) 00:37:24.022 [Pipeline] sh 00:37:24.307 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:29.597 [Pipeline] catchError 00:37:29.599 [Pipeline] { 00:37:29.612 [Pipeline] sh 00:37:29.897 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:29.897 Artifacts sizes are good 00:37:29.907 [Pipeline] } 00:37:29.921 [Pipeline] // catchError 00:37:29.932 [Pipeline] archiveArtifacts 00:37:29.939 Archiving artifacts 00:37:30.045 [Pipeline] cleanWs 00:37:30.057 [WS-CLEANUP] Deleting project workspace... 00:37:30.057 [WS-CLEANUP] Deferred wipeout is used... 00:37:30.064 [WS-CLEANUP] done 00:37:30.066 [Pipeline] } 00:37:30.082 [Pipeline] // stage 00:37:30.087 [Pipeline] } 00:37:30.101 [Pipeline] // node 00:37:30.107 [Pipeline] End of Pipeline 00:37:30.147 Finished: SUCCESS