00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2456 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3721 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.285 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.285 The recommended git tool is: git 00:00:00.285 using credential 00000000-0000-0000-0000-000000000002 00:00:00.287 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.330 Fetching changes from the remote Git repository 00:00:00.331 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.378 Using shallow fetch with depth 1 00:00:00.378 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.378 > git --version # timeout=10 00:00:00.414 > git --version # 'git version 2.39.2' 00:00:00.414 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.442 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.443 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.225 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.235 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.245 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.245 > git config core.sparsecheckout # timeout=10 00:00:07.257 > git read-tree -mu HEAD # timeout=10 00:00:07.272 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.289 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.289 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.372 [Pipeline] Start of Pipeline 00:00:07.384 [Pipeline] library 00:00:07.386 Loading library shm_lib@master 00:00:07.386 Library shm_lib@master is cached. Copying from home. 00:00:07.401 [Pipeline] node 00:00:07.412 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.414 [Pipeline] { 00:00:07.421 [Pipeline] catchError 00:00:07.422 [Pipeline] { 00:00:07.431 [Pipeline] wrap 00:00:07.437 [Pipeline] { 00:00:07.442 [Pipeline] stage 00:00:07.443 [Pipeline] { (Prologue) 00:00:07.457 [Pipeline] echo 00:00:07.459 Node: VM-host-SM38 00:00:07.463 [Pipeline] cleanWs 00:00:07.473 [WS-CLEANUP] Deleting project workspace... 00:00:07.473 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.480 [WS-CLEANUP] done 00:00:07.682 [Pipeline] setCustomBuildProperty 00:00:07.745 [Pipeline] httpRequest 00:00:08.082 [Pipeline] echo 00:00:08.083 Sorcerer 10.211.164.20 is alive 00:00:08.089 [Pipeline] retry 00:00:08.090 [Pipeline] { 00:00:08.099 [Pipeline] httpRequest 00:00:08.104 HttpMethod: GET 00:00:08.105 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.106 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.107 Response Code: HTTP/1.1 200 OK 00:00:08.108 Success: Status code 200 is in the accepted range: 200,404 00:00:08.108 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.806 [Pipeline] } 00:00:09.824 [Pipeline] // retry 00:00:09.832 [Pipeline] sh 00:00:10.225 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.242 [Pipeline] httpRequest 00:00:10.635 [Pipeline] echo 00:00:10.637 Sorcerer 10.211.164.20 is alive 00:00:10.645 [Pipeline] retry 00:00:10.647 [Pipeline] { 00:00:10.660 [Pipeline] httpRequest 00:00:10.665 HttpMethod: GET 00:00:10.666 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:10.666 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:10.687 Response Code: HTTP/1.1 200 OK 00:00:10.688 Success: Status code 200 is in the accepted range: 200,404 00:00:10.688 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:27.928 [Pipeline] } 00:01:27.943 [Pipeline] // retry 00:01:27.950 [Pipeline] sh 00:01:28.230 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:31.542 [Pipeline] sh 00:01:31.824 + git -C spdk log --oneline -n5 00:01:31.824 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:31.824 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:31.824 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:31.824 66289a6db build: use VERSION file for storing version 00:01:31.824 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:31.846 [Pipeline] withCredentials 00:01:31.857 > git --version # timeout=10 00:01:31.870 > git --version # 'git version 2.39.2' 00:01:31.887 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:31.889 [Pipeline] { 00:01:31.898 [Pipeline] retry 00:01:31.900 [Pipeline] { 00:01:31.915 [Pipeline] sh 00:01:32.197 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:32.210 [Pipeline] } 00:01:32.227 [Pipeline] // retry 00:01:32.232 [Pipeline] } 00:01:32.247 [Pipeline] // withCredentials 00:01:32.257 [Pipeline] httpRequest 00:01:32.636 [Pipeline] echo 00:01:32.638 Sorcerer 10.211.164.20 is alive 00:01:32.648 [Pipeline] retry 00:01:32.650 [Pipeline] { 00:01:32.664 [Pipeline] httpRequest 00:01:32.670 HttpMethod: GET 00:01:32.671 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.671 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.673 Response Code: HTTP/1.1 200 OK 00:01:32.674 Success: Status code 200 is in the accepted range: 200,404 00:01:32.674 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.708 [Pipeline] } 00:01:38.725 [Pipeline] // retry 
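For reference, the checkout sequence at the top of this log boils down to a shallow, pinned fetch. A hand-runnable equivalent (the URL and revision are taken from the log; the workspace path is shortened for readability):

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # Pin the working tree to the exact commit that was fetched
    # (db4637e8b949f278f369ec13f70585206ccd9507 in this run).
    git checkout -f "$(git rev-parse 'FETCH_HEAD^{commit}')"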
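The Sorcerer downloads above use Jenkins retry/httpRequest steps; a plain-shell approximation follows (the cache URL and tarball name are from the log, while the retry count and curl flags are assumptions):

    # Fetch a pinned source tarball from the internal package cache, retrying
    # on transient failures, then unpack it the way the pipeline does.
    tarball=spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
    for attempt in 1 2 3; do
        curl -fSL -o "$tarball" "http://10.211.164.20/packages/$tarball" && break
        sleep 5
    done
    tar --no-same-owner -xf "$tarball"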
00:01:38.733 [Pipeline] sh 00:01:39.017 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:40.418 [Pipeline] sh 00:01:40.717 + git -C dpdk log --oneline -n5 00:01:40.717 caf0f5d395 version: 22.11.4 00:01:40.717 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:40.717 dc9c799c7d vhost: fix missing spinlock unlock 00:01:40.717 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:40.717 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:40.749 [Pipeline] writeFile 00:01:40.764 [Pipeline] sh 00:01:41.051 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:41.063 [Pipeline] sh 00:01:41.347 + cat autorun-spdk.conf 00:01:41.347 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.347 SPDK_TEST_NVME=1 00:01:41.347 SPDK_TEST_FTL=1 00:01:41.347 SPDK_TEST_ISAL=1 00:01:41.347 SPDK_RUN_ASAN=1 00:01:41.347 SPDK_RUN_UBSAN=1 00:01:41.347 SPDK_TEST_XNVME=1 00:01:41.347 SPDK_TEST_NVME_FDP=1 00:01:41.347 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.347 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:41.347 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.356 RUN_NIGHTLY=1 00:01:41.358 [Pipeline] } 00:01:41.372 [Pipeline] // stage 00:01:41.386 [Pipeline] stage 00:01:41.388 [Pipeline] { (Run VM) 00:01:41.401 [Pipeline] sh 00:01:41.684 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:41.685 + echo 'Start stage prepare_nvme.sh' 00:01:41.685 Start stage prepare_nvme.sh 00:01:41.685 + [[ -n 6 ]] 00:01:41.685 + disk_prefix=ex6 00:01:41.685 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:41.685 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:41.685 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:41.685 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.685 ++ SPDK_TEST_NVME=1 00:01:41.685 ++ SPDK_TEST_FTL=1 00:01:41.685 ++ SPDK_TEST_ISAL=1 00:01:41.685 ++ SPDK_RUN_ASAN=1 00:01:41.685 ++ SPDK_RUN_UBSAN=1 00:01:41.685 ++ SPDK_TEST_XNVME=1 00:01:41.685 ++ SPDK_TEST_NVME_FDP=1 00:01:41.685 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.685 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:41.685 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.685 ++ RUN_NIGHTLY=1 00:01:41.685 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:41.685 + nvme_files=() 00:01:41.685 + declare -A nvme_files 00:01:41.685 + backend_dir=/var/lib/libvirt/images/backends 00:01:41.685 + nvme_files['nvme.img']=5G 00:01:41.685 + nvme_files['nvme-cmb.img']=5G 00:01:41.685 + nvme_files['nvme-multi0.img']=4G 00:01:41.685 + nvme_files['nvme-multi1.img']=4G 00:01:41.685 + nvme_files['nvme-multi2.img']=4G 00:01:41.685 + nvme_files['nvme-openstack.img']=8G 00:01:41.685 + nvme_files['nvme-zns.img']=5G 00:01:41.685 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:41.685 + (( SPDK_TEST_FTL == 1 )) 00:01:41.685 + nvme_files["nvme-ftl.img"]=6G 00:01:41.685 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:41.685 + nvme_files["nvme-fdp.img"]=1G 00:01:41.685 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:41.685 + for nvme in "${!nvme_files[@]}" 00:01:41.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:41.685 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.685 + for nvme in "${!nvme_files[@]}" 00:01:41.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:01:41.685 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:41.685 + for nvme in "${!nvme_files[@]}" 00:01:41.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:41.685 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.685 + for nvme in "${!nvme_files[@]}" 00:01:41.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:41.685 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:41.685 + for nvme in "${!nvme_files[@]}" 00:01:41.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:41.685 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.685 + for nvme in "${!nvme_files[@]}" 00:01:41.685 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:41.946 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.946 + for nvme in "${!nvme_files[@]}" 00:01:41.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:41.946 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.946 + for nvme in "${!nvme_files[@]}" 00:01:41.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:01:41.946 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:41.946 + for nvme in "${!nvme_files[@]}" 00:01:41.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:41.946 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.946 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:41.946 + echo 'End stage prepare_nvme.sh' 00:01:41.946 End stage prepare_nvme.sh 00:01:41.959 [Pipeline] sh 00:01:42.245 + DISTRO=fedora39 00:01:42.245 + CPUS=10 00:01:42.245 + RAM=12288 00:01:42.245 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:42.245 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:42.245 00:01:42.245 
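create_nvme_img.sh itself is not shown in this log, but the "Formatting ... fmt=raw ... preallocation=falloc" lines in the prepare_nvme.sh stage above match the output of qemu-img create, so a minimal equivalent for one of the backends would be:

    # Create a raw, fallocate-preallocated backing image for an emulated NVMe
    # drive (sizes come from the nvme_files table above, e.g. 4G for multi2).
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex6-nvme-multi2.img 4G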
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:42.245 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:42.245 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:42.245 HELP=0 00:01:42.245 DRY_RUN=0 00:01:42.245 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:01:42.245 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:42.245 NVME_AUTO_CREATE=0 00:01:42.245 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:01:42.245 NVME_CMB=,,,, 00:01:42.245 NVME_PMR=,,,, 00:01:42.245 NVME_ZNS=,,,, 00:01:42.245 NVME_MS=true,,,, 00:01:42.245 NVME_FDP=,,,on, 00:01:42.245 SPDK_VAGRANT_DISTRO=fedora39 00:01:42.245 SPDK_VAGRANT_VMCPU=10 00:01:42.245 SPDK_VAGRANT_VMRAM=12288 00:01:42.245 SPDK_VAGRANT_PROVIDER=libvirt 00:01:42.245 SPDK_VAGRANT_HTTP_PROXY= 00:01:42.245 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:42.245 SPDK_OPENSTACK_NETWORK=0 00:01:42.245 VAGRANT_PACKAGE_BOX=0 00:01:42.245 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:42.245 FORCE_DISTRO=true 00:01:42.245 VAGRANT_BOX_VERSION= 00:01:42.245 EXTRA_VAGRANTFILES= 00:01:42.245 NIC_MODEL=e1000 00:01:42.245 00:01:42.245 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:42.245 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:44.797 Bringing machine 'default' up with 'libvirt' provider... 00:01:45.367 ==> default: Creating image (snapshot of base box volume). 00:01:45.629 ==> default: Creating domain with the following settings... 
00:01:45.629 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734112579_54fd38f42540b34e5326 00:01:45.629 ==> default: -- Domain type: kvm 00:01:45.629 ==> default: -- Cpus: 10 00:01:45.629 ==> default: -- Feature: acpi 00:01:45.629 ==> default: -- Feature: apic 00:01:45.629 ==> default: -- Feature: pae 00:01:45.629 ==> default: -- Memory: 12288M 00:01:45.629 ==> default: -- Memory Backing: hugepages: 00:01:45.629 ==> default: -- Management MAC: 00:01:45.629 ==> default: -- Loader: 00:01:45.629 ==> default: -- Nvram: 00:01:45.629 ==> default: -- Base box: spdk/fedora39 00:01:45.629 ==> default: -- Storage pool: default 00:01:45.629 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734112579_54fd38f42540b34e5326.img (20G) 00:01:45.629 ==> default: -- Volume Cache: default 00:01:45.629 ==> default: -- Kernel: 00:01:45.629 ==> default: -- Initrd: 00:01:45.629 ==> default: -- Graphics Type: vnc 00:01:45.629 ==> default: -- Graphics Port: -1 00:01:45.629 ==> default: -- Graphics IP: 127.0.0.1 00:01:45.629 ==> default: -- Graphics Password: Not defined 00:01:45.629 ==> default: -- Video Type: cirrus 00:01:45.629 ==> default: -- Video VRAM: 9216 00:01:45.629 ==> default: -- Sound Type: 00:01:45.629 ==> default: -- Keymap: en-us 00:01:45.629 ==> default: -- TPM Path: 00:01:45.629 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:45.629 ==> default: -- Command line args: 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:45.630 ==> default: -> value=-drive, 00:01:45.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:45.630 ==> default: -> value=-drive, 00:01:45.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:45.630 ==> default: -> value=-drive, 00:01:45.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.630 ==> default: -> value=-drive, 00:01:45.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.630 ==> default: -> value=-drive, 00:01:45.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:45.630 ==> default: -> value=-drive, 00:01:45.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:45.630 ==> default: -> value=-device, 00:01:45.630 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.891 ==> default: Creating shared folders metadata... 00:01:45.891 ==> default: Starting domain. 00:01:47.811 ==> default: Waiting for domain to get an IP address... 00:02:02.694 ==> default: Waiting for SSH to become available... 00:02:02.694 ==> default: Configuring and enabling network interfaces... 00:02:06.898 default: SSH address: 192.168.121.248:22 00:02:06.898 default: SSH username: vagrant 00:02:06.898 default: SSH auth method: private key 00:02:08.811 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:17.010 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:23.604 ==> default: Mounting SSHFS shared folder... 00:02:25.518 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:25.518 ==> default: Checking Mount.. 00:02:26.458 ==> default: Folder Successfully Mounted! 00:02:26.458 00:02:26.458 SUCCESS! 00:02:26.458 00:02:26.458 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:26.458 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:26.458 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:26.458 00:02:26.468 [Pipeline] } 00:02:26.483 [Pipeline] // stage 00:02:26.492 [Pipeline] dir 00:02:26.493 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:26.494 [Pipeline] { 00:02:26.507 [Pipeline] catchError 00:02:26.509 [Pipeline] { 00:02:26.521 [Pipeline] sh 00:02:26.809 + vagrant ssh-config --host vagrant 00:02:26.809 + sed -ne '/^Host/,$p' 00:02:26.809 + tee ssh_conf 00:02:30.110 Host vagrant 00:02:30.110 HostName 192.168.121.248 00:02:30.110 User vagrant 00:02:30.110 Port 22 00:02:30.110 UserKnownHostsFile /dev/null 00:02:30.110 StrictHostKeyChecking no 00:02:30.110 PasswordAuthentication no 00:02:30.110 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:30.110 IdentitiesOnly yes 00:02:30.110 LogLevel FATAL 00:02:30.110 ForwardAgent yes 00:02:30.110 ForwardX11 yes 00:02:30.110 00:02:30.122 [Pipeline] withEnv 00:02:30.123 [Pipeline] { 00:02:30.132 [Pipeline] sh 00:02:30.415 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:30.415 source /etc/os-release 00:02:30.415 [[ -e /image.version ]] && img=$(< /image.version) 00:02:30.415 # Minimal, systemd-like check. 
00:02:30.415 if [[ -e /.dockerenv ]]; then 00:02:30.415 # Clear garbage from the node'\''s name: 00:02:30.415 # agt-er_autotest_547-896 -> autotest_547-896 00:02:30.415 # $HOSTNAME is the actual container id 00:02:30.415 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:30.415 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:30.415 # We can assume this is a mount from a host where container is running, 00:02:30.415 # so fetch its hostname to easily identify the target swarm worker. 00:02:30.415 container="$(< /etc/hostname) ($agent)" 00:02:30.415 else 00:02:30.415 # Fallback 00:02:30.415 container=$agent 00:02:30.415 fi 00:02:30.415 fi 00:02:30.415 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:30.415 ' 00:02:30.687 [Pipeline] } 00:02:30.703 [Pipeline] // withEnv 00:02:30.710 [Pipeline] setCustomBuildProperty 00:02:30.723 [Pipeline] stage 00:02:30.725 [Pipeline] { (Tests) 00:02:30.739 [Pipeline] sh 00:02:31.022 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:31.323 [Pipeline] sh 00:02:31.605 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:31.881 [Pipeline] timeout 00:02:31.881 Timeout set to expire in 50 min 00:02:31.883 [Pipeline] { 00:02:31.896 [Pipeline] sh 00:02:32.180 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:32.751 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version 00:02:32.764 [Pipeline] sh 00:02:33.047 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:33.323 [Pipeline] sh 00:02:33.606 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:33.883 [Pipeline] sh 00:02:34.277 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:34.277 ++ readlink -f spdk_repo 00:02:34.278 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:34.278 + [[ -n /home/vagrant/spdk_repo ]] 00:02:34.278 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:34.278 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:34.278 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:34.278 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:34.278 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:34.278 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:34.278 + cd /home/vagrant/spdk_repo 00:02:34.278 + source /etc/os-release 00:02:34.278 ++ NAME='Fedora Linux' 00:02:34.278 ++ VERSION='39 (Cloud Edition)' 00:02:34.278 ++ ID=fedora 00:02:34.278 ++ VERSION_ID=39 00:02:34.278 ++ VERSION_CODENAME= 00:02:34.278 ++ PLATFORM_ID=platform:f39 00:02:34.278 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:34.278 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:34.278 ++ LOGO=fedora-logo-icon 00:02:34.278 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:34.278 ++ HOME_URL=https://fedoraproject.org/ 00:02:34.278 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:34.278 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:34.278 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:34.278 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:34.278 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:34.278 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:34.278 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:34.278 ++ SUPPORT_END=2024-11-12 00:02:34.278 ++ VARIANT='Cloud Edition' 00:02:34.278 ++ VARIANT_ID=cloud 00:02:34.278 + uname -a 00:02:34.278 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:34.278 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:34.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:34.796 Hugepages 00:02:34.796 node hugesize free / total 00:02:34.796 node0 1048576kB 0 / 0 00:02:35.072 node0 2048kB 0 / 0 00:02:35.072 00:02:35.072 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:35.072 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:35.072 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:35.072 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:35.072 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:35.072 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:35.072 + rm -f /tmp/spdk-ld-path 00:02:35.072 + source autorun-spdk.conf 00:02:35.072 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.072 ++ SPDK_TEST_NVME=1 00:02:35.072 ++ SPDK_TEST_FTL=1 00:02:35.072 ++ SPDK_TEST_ISAL=1 00:02:35.072 ++ SPDK_RUN_ASAN=1 00:02:35.072 ++ SPDK_RUN_UBSAN=1 00:02:35.072 ++ SPDK_TEST_XNVME=1 00:02:35.072 ++ SPDK_TEST_NVME_FDP=1 00:02:35.072 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:35.072 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.072 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.072 ++ RUN_NIGHTLY=1 00:02:35.072 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:35.072 + [[ -n '' ]] 00:02:35.072 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:35.072 + for M in /var/spdk/build-*-manifest.txt 00:02:35.072 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:35.072 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.072 + for M in /var/spdk/build-*-manifest.txt 00:02:35.072 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:35.072 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.072 + for M in /var/spdk/build-*-manifest.txt 00:02:35.072 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:35.072 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.072 ++ uname 00:02:35.072 + [[ Linux == 
\L\i\n\u\x ]] 00:02:35.072 + sudo dmesg -T 00:02:35.072 + sudo dmesg --clear 00:02:35.072 + dmesg_pid=5775 00:02:35.072 + [[ Fedora Linux == FreeBSD ]] 00:02:35.072 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.072 + sudo dmesg -Tw 00:02:35.072 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.072 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:35.072 + [[ -x /usr/src/fio-static/fio ]] 00:02:35.072 + export FIO_BIN=/usr/src/fio-static/fio 00:02:35.072 + FIO_BIN=/usr/src/fio-static/fio 00:02:35.072 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:35.072 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:35.072 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:35.072 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.072 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.072 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:35.072 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.072 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.072 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.072 17:57:09 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:35.072 17:57:09 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.072 17:57:09 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.072 17:57:09 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:35.072 17:57:09 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:35.072 17:57:09 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:35.072 17:57:09 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:35.072 17:57:09 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:35.072 17:57:09 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:35.072 17:57:09 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:35.072 17:57:09 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:35.073 17:57:09 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.073 17:57:09 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.073 17:57:09 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:35.073 17:57:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:35.073 17:57:09 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.342 17:57:09 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:35.342 17:57:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:35.342 17:57:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:35.342 17:57:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:35.342 17:57:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.342 17:57:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.342 17:57:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.342 17:57:09 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.342 17:57:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.342 17:57:09 -- paths/export.sh@5 -- $ export PATH 00:02:35.342 17:57:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.342 17:57:09 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:35.342 17:57:09 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:35.342 17:57:09 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734112629.XXXXXX 00:02:35.342 17:57:09 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734112629.Rf0ZIr 00:02:35.342 17:57:09 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:35.342 17:57:09 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:02:35.342 17:57:09 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:35.342 17:57:09 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:35.342 17:57:09 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:35.342 17:57:09 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:35.342 17:57:09 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:35.342 17:57:09 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:35.342 17:57:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.342 17:57:09 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:35.342 17:57:09 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:35.342 17:57:09 -- pm/common@17 -- $ local monitor 00:02:35.342 17:57:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.342 17:57:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.342 17:57:09 -- pm/common@25 -- $ 
sleep 1 00:02:35.342 17:57:09 -- pm/common@21 -- $ date +%s 00:02:35.342 17:57:09 -- pm/common@21 -- $ date +%s 00:02:35.342 17:57:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734112629 00:02:35.342 17:57:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734112629 00:02:35.342 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734112629_collect-cpu-load.pm.log 00:02:35.342 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734112629_collect-vmstat.pm.log 00:02:36.276 17:57:10 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:36.276 17:57:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:36.276 17:57:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:36.276 17:57:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:36.276 17:57:10 -- spdk/autobuild.sh@16 -- $ date -u 00:02:36.276 Fri Dec 13 05:57:10 PM UTC 2024 00:02:36.276 17:57:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:36.276 v25.01-rc1-2-ge01cb43b8 00:02:36.276 17:57:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:36.276 17:57:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:36.276 17:57:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:36.276 17:57:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.276 17:57:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.276 ************************************ 00:02:36.276 START TEST asan 00:02:36.276 ************************************ 00:02:36.276 using asan 00:02:36.276 17:57:10 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:36.276 00:02:36.276 real 0m0.000s 00:02:36.276 user 0m0.000s 00:02:36.276 sys 0m0.000s 00:02:36.276 17:57:10 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.276 17:57:10 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.276 ************************************ 00:02:36.276 END TEST asan 00:02:36.276 ************************************ 00:02:36.276 17:57:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:36.276 17:57:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:36.276 17:57:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:36.276 17:57:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.276 17:57:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.276 ************************************ 00:02:36.276 START TEST ubsan 00:02:36.276 ************************************ 00:02:36.276 using ubsan 00:02:36.276 17:57:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:36.276 00:02:36.276 real 0m0.000s 00:02:36.276 user 0m0.000s 00:02:36.276 sys 0m0.000s 00:02:36.276 17:57:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.276 ************************************ 00:02:36.276 END TEST ubsan 00:02:36.276 ************************************ 00:02:36.276 17:57:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.276 17:57:10 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:36.276 17:57:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:36.276 17:57:10 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:36.276 17:57:10 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 
']' 00:02:36.276 17:57:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.276 17:57:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.276 ************************************ 00:02:36.276 START TEST build_native_dpdk 00:02:36.276 ************************************ 00:02:36.276 17:57:10 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:36.276 caf0f5d395 version: 22.11.4 00:02:36.276 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:36.276 dc9c799c7d vhost: fix missing spinlock unlock 00:02:36.276 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:36.276 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:36.276 17:57:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 
00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:36.277 patching file config/rte_config.h 00:02:36.277 Hunk #1 succeeded at 60 (offset 1 line). 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:36.277 patching file lib/pcapng/rte_pcapng.c 00:02:36.277 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:36.277 17:57:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:36.277 17:57:10 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:40.458 The Meson build system 00:02:40.458 Version: 1.5.0 00:02:40.458 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:40.458 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:40.458 Build type: native build 00:02:40.458 Program cat found: YES (/usr/bin/cat) 00:02:40.458 Project name: DPDK 00:02:40.458 Project version: 22.11.4 00:02:40.458 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.458 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:40.458 Host machine cpu family: x86_64 00:02:40.458 Host machine cpu: x86_64 00:02:40.458 Message: ## Building in Developer Mode ## 00:02:40.458 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:40.458 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:40.458 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:40.458 Program objdump found: YES (/usr/bin/objdump) 00:02:40.458 Program python3 found: YES (/usr/bin/python3) 00:02:40.458 Program cat found: YES (/usr/bin/cat) 00:02:40.458 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
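The xtrace above steps through the version gates in scripts/common.sh one component at a time; a compact re-implementation of the logic it shows (a sketch, omitting the upstream decimal regex validation of each component):

    # Compare dotted versions component-wise; supports the '<' and '>='
    # operators used by the lt/ge helpers traced above.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v c1 c2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            c1=${ver1[v]:-0} c2=${ver2[v]:-0}
            ((10#$c1 > 10#$c2)) && { [[ $op == '<' ]] && return 1 || return 0; }
            ((10#$c1 < 10#$c2)) && { [[ $op == '<' ]] && return 0 || return 1; }
        done
        [[ $op == '>=' ]]   # all components equal: '<' fails, '>=' holds
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    ge() { cmp_versions "$1" '>=' "$2"; }
    # In this run: lt 22.11.4 24.07.0 succeeds, so the pcapng patch is
    # applied; ge 22.11.4 24.07.0 fails, so dpdk_kmods stays false.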
00:02:40.458 Checking for size of "void *" : 8 00:02:40.458 Checking for size of "void *" : 8 (cached) 00:02:40.458 Library m found: YES 00:02:40.458 Library numa found: YES 00:02:40.458 Has header "numaif.h" : YES 00:02:40.458 Library fdt found: NO 00:02:40.458 Library execinfo found: NO 00:02:40.458 Has header "execinfo.h" : YES 00:02:40.458 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.458 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:40.458 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:40.458 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:40.458 Run-time dependency openssl found: YES 3.1.1 00:02:40.458 Run-time dependency libpcap found: YES 1.10.4 00:02:40.458 Has header "pcap.h" with dependency libpcap: YES 00:02:40.458 Compiler for C supports arguments -Wcast-qual: YES 00:02:40.458 Compiler for C supports arguments -Wdeprecated: YES 00:02:40.458 Compiler for C supports arguments -Wformat: YES 00:02:40.458 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:40.458 Compiler for C supports arguments -Wformat-security: NO 00:02:40.458 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.458 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:40.458 Compiler for C supports arguments -Wnested-externs: YES 00:02:40.458 Compiler for C supports arguments -Wold-style-definition: YES 00:02:40.458 Compiler for C supports arguments -Wpointer-arith: YES 00:02:40.458 Compiler for C supports arguments -Wsign-compare: YES 00:02:40.458 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:40.458 Compiler for C supports arguments -Wundef: YES 00:02:40.458 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.458 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:40.458 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:40.458 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.458 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:40.458 Compiler for C supports arguments -mavx512f: YES 00:02:40.458 Checking if "AVX512 checking" compiles: YES 00:02:40.458 Fetching value of define "__SSE4_2__" : 1 00:02:40.458 Fetching value of define "__AES__" : 1 00:02:40.458 Fetching value of define "__AVX__" : 1 00:02:40.458 Fetching value of define "__AVX2__" : 1 00:02:40.458 Fetching value of define "__AVX512BW__" : 1 00:02:40.458 Fetching value of define "__AVX512CD__" : 1 00:02:40.458 Fetching value of define "__AVX512DQ__" : 1 00:02:40.458 Fetching value of define "__AVX512F__" : 1 00:02:40.458 Fetching value of define "__AVX512VL__" : 1 00:02:40.458 Fetching value of define "__PCLMUL__" : 1 00:02:40.458 Fetching value of define "__RDRND__" : 1 00:02:40.458 Fetching value of define "__RDSEED__" : 1 00:02:40.458 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:40.458 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:40.458 Message: lib/kvargs: Defining dependency "kvargs" 00:02:40.458 Message: lib/telemetry: Defining dependency "telemetry" 00:02:40.458 Checking for function "getentropy" : YES 00:02:40.458 Message: lib/eal: Defining dependency "eal" 00:02:40.458 Message: lib/ring: Defining dependency "ring" 00:02:40.458 Message: lib/rcu: Defining dependency "rcu" 00:02:40.458 Message: lib/mempool: Defining dependency "mempool" 00:02:40.458 Message: lib/mbuf: Defining dependency "mbuf" 00:02:40.458 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:40.458 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:02:40.458 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.458 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.458 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:40.458 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:40.458 Compiler for C supports arguments -mpclmul: YES 00:02:40.458 Compiler for C supports arguments -maes: YES 00:02:40.458 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:40.458 Compiler for C supports arguments -mavx512bw: YES 00:02:40.458 Compiler for C supports arguments -mavx512dq: YES 00:02:40.458 Compiler for C supports arguments -mavx512vl: YES 00:02:40.458 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:40.458 Compiler for C supports arguments -mavx2: YES 00:02:40.458 Compiler for C supports arguments -mavx: YES 00:02:40.458 Message: lib/net: Defining dependency "net" 00:02:40.458 Message: lib/meter: Defining dependency "meter" 00:02:40.458 Message: lib/ethdev: Defining dependency "ethdev" 00:02:40.458 Message: lib/pci: Defining dependency "pci" 00:02:40.458 Message: lib/cmdline: Defining dependency "cmdline" 00:02:40.458 Message: lib/metrics: Defining dependency "metrics" 00:02:40.459 Message: lib/hash: Defining dependency "hash" 00:02:40.459 Message: lib/timer: Defining dependency "timer" 00:02:40.459 Fetching value of define "__AVX2__" : 1 (cached) 00:02:40.459 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.459 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:40.459 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:40.459 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.459 Message: lib/acl: Defining dependency "acl" 00:02:40.459 Message: lib/bbdev: Defining dependency "bbdev" 00:02:40.459 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:40.459 Run-time dependency libelf found: YES 0.191 00:02:40.459 Message: lib/bpf: Defining dependency "bpf" 00:02:40.459 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:40.459 Message: lib/compressdev: Defining dependency "compressdev" 00:02:40.459 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:40.459 Message: lib/distributor: Defining dependency "distributor" 00:02:40.459 Message: lib/efd: Defining dependency "efd" 00:02:40.459 Message: lib/eventdev: Defining dependency "eventdev" 00:02:40.459 Message: lib/gpudev: Defining dependency "gpudev" 00:02:40.459 Message: lib/gro: Defining dependency "gro" 00:02:40.459 Message: lib/gso: Defining dependency "gso" 00:02:40.459 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:40.459 Message: lib/jobstats: Defining dependency "jobstats" 00:02:40.459 Message: lib/latencystats: Defining dependency "latencystats" 00:02:40.459 Message: lib/lpm: Defining dependency "lpm" 00:02:40.459 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.459 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.459 Fetching value of define "__AVX512IFMA__" : 1 00:02:40.459 Message: lib/member: Defining dependency "member" 00:02:40.459 Message: lib/pcapng: Defining dependency "pcapng" 00:02:40.459 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:40.459 Message: lib/power: Defining dependency "power" 00:02:40.459 Message: lib/rawdev: Defining dependency "rawdev" 00:02:40.459 Message: lib/regexdev: Defining dependency "regexdev" 00:02:40.459 Message: lib/dmadev: Defining dependency "dmadev" 00:02:40.459 Message: lib/rib: Defining dependency "rib" 00:02:40.459 Message: lib/reorder: 
Defining dependency "reorder" 00:02:40.459 Message: lib/sched: Defining dependency "sched" 00:02:40.459 Message: lib/security: Defining dependency "security" 00:02:40.459 Message: lib/stack: Defining dependency "stack" 00:02:40.459 Has header "linux/userfaultfd.h" : YES 00:02:40.459 Message: lib/vhost: Defining dependency "vhost" 00:02:40.459 Message: lib/ipsec: Defining dependency "ipsec" 00:02:40.459 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.459 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.459 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.459 Message: lib/fib: Defining dependency "fib" 00:02:40.459 Message: lib/port: Defining dependency "port" 00:02:40.459 Message: lib/pdump: Defining dependency "pdump" 00:02:40.459 Message: lib/table: Defining dependency "table" 00:02:40.459 Message: lib/pipeline: Defining dependency "pipeline" 00:02:40.459 Message: lib/graph: Defining dependency "graph" 00:02:40.459 Message: lib/node: Defining dependency "node" 00:02:40.459 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:40.459 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:40.459 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:40.459 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:40.459 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:40.459 Compiler for C supports arguments -Wno-unused-value: YES 00:02:40.459 Compiler for C supports arguments -Wno-format: YES 00:02:40.459 Compiler for C supports arguments -Wno-format-security: YES 00:02:40.459 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:40.459 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:40.459 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:40.459 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:41.392 Fetching value of define "__AVX2__" : 1 (cached) 00:02:41.392 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:41.392 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:41.392 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.392 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:41.392 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:41.392 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:41.392 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:41.392 Configuring doxy-api.conf using configuration 00:02:41.392 Program sphinx-build found: NO 00:02:41.392 Configuring rte_build_config.h using configuration 00:02:41.392 Message: 00:02:41.392 ================= 00:02:41.392 Applications Enabled 00:02:41.392 ================= 00:02:41.392 00:02:41.392 apps: 00:02:41.392 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:41.392 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:41.392 test-security-perf, 00:02:41.392 00:02:41.392 Message: 00:02:41.392 ================= 00:02:41.392 Libraries Enabled 00:02:41.392 ================= 00:02:41.392 00:02:41.392 libs: 00:02:41.392 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:41.392 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:41.392 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:41.392 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:41.392 member, pcapng, power, rawdev, regexdev, dmadev, rib, 
00:02:41.392 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:41.392 table, pipeline, graph, node,
00:02:41.392 
00:02:41.392 Message:
00:02:41.392 ===============
00:02:41.392 Drivers Enabled
00:02:41.392 ===============
00:02:41.392 
00:02:41.392 common:
00:02:41.392 
00:02:41.392 bus:
00:02:41.392 pci, vdev,
00:02:41.392 mempool:
00:02:41.392 ring,
00:02:41.392 dma:
00:02:41.392 
00:02:41.392 net:
00:02:41.392 i40e,
00:02:41.392 raw:
00:02:41.392 
00:02:41.392 crypto:
00:02:41.392 
00:02:41.392 compress:
00:02:41.392 
00:02:41.392 regex:
00:02:41.392 
00:02:41.392 vdpa:
00:02:41.392 
00:02:41.392 event:
00:02:41.392 
00:02:41.392 baseband:
00:02:41.392 
00:02:41.392 gpu:
00:02:41.392 
00:02:41.392 
00:02:41.392 Message:
00:02:41.392 =================
00:02:41.392 Content Skipped
00:02:41.392 =================
00:02:41.392 
00:02:41.392 apps:
00:02:41.392 
00:02:41.392 libs:
00:02:41.392 kni: explicitly disabled via build config (deprecated lib)
00:02:41.392 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:41.392 
00:02:41.392 drivers:
00:02:41.392 common/cpt: not in enabled drivers build config
00:02:41.392 common/dpaax: not in enabled drivers build config
00:02:41.392 common/iavf: not in enabled drivers build config
00:02:41.392 common/idpf: not in enabled drivers build config
00:02:41.392 common/mvep: not in enabled drivers build config
00:02:41.392 common/octeontx: not in enabled drivers build config
00:02:41.392 bus/auxiliary: not in enabled drivers build config
00:02:41.392 bus/dpaa: not in enabled drivers build config
00:02:41.392 bus/fslmc: not in enabled drivers build config
00:02:41.392 bus/ifpga: not in enabled drivers build config
00:02:41.392 bus/vmbus: not in enabled drivers build config
00:02:41.392 common/cnxk: not in enabled drivers build config
00:02:41.392 common/mlx5: not in enabled drivers build config
00:02:41.392 common/qat: not in enabled drivers build config
00:02:41.392 common/sfc_efx: not in enabled drivers build config
00:02:41.392 mempool/bucket: not in enabled drivers build config
00:02:41.392 mempool/cnxk: not in enabled drivers build config
00:02:41.392 mempool/dpaa: not in enabled drivers build config
00:02:41.392 mempool/dpaa2: not in enabled drivers build config
00:02:41.392 mempool/octeontx: not in enabled drivers build config
00:02:41.392 mempool/stack: not in enabled drivers build config
00:02:41.392 dma/cnxk: not in enabled drivers build config
00:02:41.392 dma/dpaa: not in enabled drivers build config
00:02:41.392 dma/dpaa2: not in enabled drivers build config
00:02:41.392 dma/hisilicon: not in enabled drivers build config
00:02:41.392 dma/idxd: not in enabled drivers build config
00:02:41.392 dma/ioat: not in enabled drivers build config
00:02:41.392 dma/skeleton: not in enabled drivers build config
00:02:41.392 net/af_packet: not in enabled drivers build config
00:02:41.392 net/af_xdp: not in enabled drivers build config
00:02:41.392 net/ark: not in enabled drivers build config
00:02:41.392 net/atlantic: not in enabled drivers build config
00:02:41.392 net/avp: not in enabled drivers build config
00:02:41.392 net/axgbe: not in enabled drivers build config
00:02:41.392 net/bnx2x: not in enabled drivers build config
00:02:41.392 net/bnxt: not in enabled drivers build config
00:02:41.392 net/bonding: not in enabled drivers build config
00:02:41.392 net/cnxk: not in enabled drivers build config
00:02:41.392 net/cxgbe: not in enabled drivers build config
00:02:41.392 net/dpaa: not in enabled drivers build config
00:02:41.392 net/dpaa2: not in enabled drivers build config
00:02:41.392 net/e1000: not in enabled drivers build config
00:02:41.392 net/ena: not in enabled drivers build config
00:02:41.392 net/enetc: not in enabled drivers build config
00:02:41.392 net/enetfec: not in enabled drivers build config
00:02:41.392 net/enic: not in enabled drivers build config
00:02:41.392 net/failsafe: not in enabled drivers build config
00:02:41.392 net/fm10k: not in enabled drivers build config
00:02:41.392 net/gve: not in enabled drivers build config
00:02:41.392 net/hinic: not in enabled drivers build config
00:02:41.392 net/hns3: not in enabled drivers build config
00:02:41.392 net/iavf: not in enabled drivers build config
00:02:41.392 net/ice: not in enabled drivers build config
00:02:41.392 net/idpf: not in enabled drivers build config
00:02:41.392 net/igc: not in enabled drivers build config
00:02:41.392 net/ionic: not in enabled drivers build config
00:02:41.392 net/ipn3ke: not in enabled drivers build config
00:02:41.392 net/ixgbe: not in enabled drivers build config
00:02:41.392 net/kni: not in enabled drivers build config
00:02:41.392 net/liquidio: not in enabled drivers build config
00:02:41.392 net/mana: not in enabled drivers build config
00:02:41.392 net/memif: not in enabled drivers build config
00:02:41.392 net/mlx4: not in enabled drivers build config
00:02:41.392 net/mlx5: not in enabled drivers build config
00:02:41.392 net/mvneta: not in enabled drivers build config
00:02:41.392 net/mvpp2: not in enabled drivers build config
00:02:41.392 net/netvsc: not in enabled drivers build config
00:02:41.392 net/nfb: not in enabled drivers build config
00:02:41.392 net/nfp: not in enabled drivers build config
00:02:41.392 net/ngbe: not in enabled drivers build config
00:02:41.392 net/null: not in enabled drivers build config
00:02:41.392 net/octeontx: not in enabled drivers build config
00:02:41.392 net/octeon_ep: not in enabled drivers build config
00:02:41.392 net/pcap: not in enabled drivers build config
00:02:41.392 net/pfe: not in enabled drivers build config
00:02:41.392 net/qede: not in enabled drivers build config
00:02:41.392 net/ring: not in enabled drivers build config
00:02:41.392 net/sfc: not in enabled drivers build config
00:02:41.392 net/softnic: not in enabled drivers build config
00:02:41.392 net/tap: not in enabled drivers build config
00:02:41.392 net/thunderx: not in enabled drivers build config
00:02:41.392 net/txgbe: not in enabled drivers build config
00:02:41.392 net/vdev_netvsc: not in enabled drivers build config
00:02:41.392 net/vhost: not in enabled drivers build config
00:02:41.392 net/virtio: not in enabled drivers build config
00:02:41.392 net/vmxnet3: not in enabled drivers build config
00:02:41.392 raw/cnxk_bphy: not in enabled drivers build config
00:02:41.392 raw/cnxk_gpio: not in enabled drivers build config
00:02:41.392 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:41.392 raw/ifpga: not in enabled drivers build config
00:02:41.392 raw/ntb: not in enabled drivers build config
00:02:41.392 raw/skeleton: not in enabled drivers build config
00:02:41.392 crypto/armv8: not in enabled drivers build config
00:02:41.392 crypto/bcmfs: not in enabled drivers build config
00:02:41.392 crypto/caam_jr: not in enabled drivers build config
00:02:41.392 crypto/ccp: not in enabled drivers build config
00:02:41.392 crypto/cnxk: not in enabled drivers build config
00:02:41.392 crypto/dpaa_sec: not in enabled drivers build config
00:02:41.392 crypto/dpaa2_sec: not in enabled drivers build config
00:02:41.392 crypto/ipsec_mb: not in enabled drivers build config
00:02:41.392 crypto/mlx5: not in enabled drivers build config
00:02:41.392 crypto/mvsam: not in enabled drivers build config
00:02:41.392 crypto/nitrox: not in enabled drivers build config
00:02:41.392 crypto/null: not in enabled drivers build config
00:02:41.392 crypto/octeontx: not in enabled drivers build config
00:02:41.392 crypto/openssl: not in enabled drivers build config
00:02:41.392 crypto/scheduler: not in enabled drivers build config
00:02:41.392 crypto/uadk: not in enabled drivers build config
00:02:41.392 crypto/virtio: not in enabled drivers build config
00:02:41.392 compress/isal: not in enabled drivers build config
00:02:41.392 compress/mlx5: not in enabled drivers build config
00:02:41.393 compress/octeontx: not in enabled drivers build config
00:02:41.393 compress/zlib: not in enabled drivers build config
00:02:41.393 regex/mlx5: not in enabled drivers build config
00:02:41.393 regex/cn9k: not in enabled drivers build config
00:02:41.393 vdpa/ifc: not in enabled drivers build config
00:02:41.393 vdpa/mlx5: not in enabled drivers build config
00:02:41.393 vdpa/sfc: not in enabled drivers build config
00:02:41.393 event/cnxk: not in enabled drivers build config
00:02:41.393 event/dlb2: not in enabled drivers build config
00:02:41.393 event/dpaa: not in enabled drivers build config
00:02:41.393 event/dpaa2: not in enabled drivers build config
00:02:41.393 event/dsw: not in enabled drivers build config
00:02:41.393 event/opdl: not in enabled drivers build config
00:02:41.393 event/skeleton: not in enabled drivers build config
00:02:41.393 event/sw: not in enabled drivers build config
00:02:41.393 event/octeontx: not in enabled drivers build config
00:02:41.393 baseband/acc: not in enabled drivers build config
00:02:41.393 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:41.393 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:41.393 baseband/la12xx: not in enabled drivers build config
00:02:41.393 baseband/null: not in enabled drivers build config
00:02:41.393 baseband/turbo_sw: not in enabled drivers build config
00:02:41.393 gpu/cuda: not in enabled drivers build config
00:02:41.393 
00:02:41.393 
00:02:41.393 Build targets in project: 309
00:02:41.393 
00:02:41.393 DPDK 22.11.4
00:02:41.393 
00:02:41.393 User defined options
00:02:41.393 libdir : lib
00:02:41.393 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:41.393 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:41.393 c_link_args :
00:02:41.393 enable_docs : false
00:02:41.393 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:41.393 enable_kmods : false
00:02:41.393 machine : native
00:02:41.393 tests : false
00:02:41.393 
00:02:41.393 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:41.393 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
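(Aside: the configuration just summarized can be reproduced with the non-deprecated `meson setup` spelling that the WARNING above asks for. A minimal sketch, assuming it is run from a DPDK 22.11.4 source tree; the option values are copied from the User defined options block, and the build directory name matches the ninja step that follows:

# Same configuration, written as an explicit `meson setup` invocation.
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/dpdk/build \
  --libdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
  -Denable_kmods=false \
  -Dmachine=native \
  -Dtests=false
ninja -C build-tmp -j10   # the same build step the job runs next

End of aside; the build log continues below.)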
00:02:41.393 17:57:15 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:41.650 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:41.650 [1/738] Generating lib/rte_kvargs_def with a custom command
00:02:41.650 [2/738] Generating lib/rte_kvargs_mingw with a custom command
00:02:41.650 [3/738] Generating lib/rte_telemetry_def with a custom command
00:02:41.650 [4/738] Generating lib/rte_telemetry_mingw with a custom command
00:02:41.650 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:41.650 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:41.650 [7/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:41.650 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:41.650 [9/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:41.650 [10/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:41.650 [11/738] Linking static target lib/librte_kvargs.a
00:02:41.650 [12/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:41.650 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:41.650 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:41.650 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:41.650 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:41.907 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:41.907 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:41.907 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:41.907 [20/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:41.907 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:41.907 [22/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.907 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:41.907 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:41.907 [25/738] Linking target lib/librte_kvargs.so.23.0
00:02:41.907 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:41.907 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:41.907 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:41.907 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:41.907 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:41.907 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:41.907 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:42.164 [33/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:42.164 [34/738] Linking static target lib/librte_telemetry.a
00:02:42.164 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:42.164 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:42.164 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:42.164 [38/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:42.164 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:42.164 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:42.164 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:42.421 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.421 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:42.421 [44/738] Linking target lib/librte_telemetry.so.23.0
00:02:42.421 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:42.421 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:42.421 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:42.421 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:42.421 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:42.421 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:42.421 [51/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:42.421 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:42.421 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:42.421 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:42.421 [55/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:42.421 [56/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:42.421 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:42.421 [58/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:42.421 [59/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:42.679 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:42.679 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:42.679 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:42.679 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:42.679 [64/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:42.679 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:42.679 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:02:42.679 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:42.679 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:42.679 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:42.679 [70/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:42.679 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:42.679 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:42.679 [73/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:42.679 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:42.679 [75/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:42.679 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:42.679 [77/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:42.679 [78/738] Generating lib/rte_eal_def with a custom command
00:02:42.679 [79/738] Generating lib/rte_eal_mingw with a custom command
00:02:42.679 [80/738] Generating lib/rte_ring_def with a custom command
00:02:42.679 [81/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:42.937 [82/738] Generating lib/rte_ring_mingw with a custom command
00:02:42.937 [83/738] Generating lib/rte_rcu_def with a custom command
00:02:42.937 [84/738] Generating lib/rte_rcu_mingw with a custom command
00:02:42.937 [85/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:42.937 [86/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:42.937 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:42.937 [88/738] Linking static target lib/librte_ring.a
00:02:42.937 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:42.937 [90/738] Generating lib/rte_mempool_def with a custom command
00:02:42.937 [91/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:42.937 [92/738] Generating lib/rte_mempool_mingw with a custom command
00:02:42.937 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:43.195 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.195 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:43.195 [96/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:43.195 [97/738] Generating lib/rte_mbuf_mingw with a custom command
00:02:43.195 [98/738] Generating lib/rte_mbuf_def with a custom command
00:02:43.195 [99/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:43.195 [100/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:43.195 [101/738] Linking static target lib/librte_eal.a
00:02:43.452 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:43.452 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:43.452 [104/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:43.452 [105/738] Linking static target lib/librte_rcu.a
00:02:43.452 [106/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:43.452 [107/738] Linking static target lib/librte_mempool.a
00:02:43.452 [108/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:43.452 [109/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:43.452 [110/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:43.763 [111/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:43.763 [112/738] Generating lib/rte_net_mingw with a custom command
00:02:43.763 [113/738] Generating lib/rte_net_def with a custom command
00:02:43.763 [114/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:43.763 [115/738] Generating lib/rte_meter_def with a custom command
00:02:43.763 [116/738] Generating lib/rte_meter_mingw with a custom command
00:02:43.763 [117/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.763 [118/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:43.763 [119/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:43.763 [120/738] Linking static target lib/librte_meter.a
00:02:44.021 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.021 [122/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:44.021 [123/738] Linking static target lib/librte_mbuf.a
00:02:44.021 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:44.021 [125/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:44.021 [126/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:44.021 [127/738] Linking static target lib/librte_net.a
00:02:44.021 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:44.021 [129/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:44.021 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:44.279 [131/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.279 [132/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.279 [133/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.280 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:44.280 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:44.280 [136/738] Generating lib/rte_ethdev_def with a custom command
00:02:44.280 [137/738] Generating lib/rte_ethdev_mingw with a custom command
00:02:44.280 [138/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:44.539 [139/738] Generating lib/rte_pci_def with a custom command
00:02:44.539 [140/738] Generating lib/rte_pci_mingw with a custom command
00:02:44.539 [141/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:44.539 [142/738] Linking static target lib/librte_pci.a
00:02:44.539 [143/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:44.539 [144/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:44.539 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:44.539 [146/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:44.539 [147/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.539 [148/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:44.796 [149/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:44.796 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:44.796 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:44.796 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:44.796 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:44.796 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:44.796 [155/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:44.796 [156/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:44.797 [157/738] Generating lib/rte_cmdline_mingw with a custom command
00:02:44.797 [158/738] Generating lib/rte_cmdline_def with a custom command
00:02:44.797 [159/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:44.797 [160/738] Generating lib/rte_metrics_def with a custom command
00:02:44.797 [161/738] Generating lib/rte_metrics_mingw with a custom command
00:02:44.797 [162/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:44.797 [163/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:44.797 [164/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:44.797 [165/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:44.797 [166/738] Generating lib/rte_hash_mingw with a custom command
00:02:44.797 [167/738] Generating lib/rte_hash_def with a custom command
00:02:44.797 [168/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:44.797 [169/738] Generating lib/rte_timer_def with a custom command
00:02:44.797 [170/738] Linking static target lib/librte_cmdline.a
00:02:44.797 [171/738] Generating lib/rte_timer_mingw with a custom command
00:02:45.054 [172/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:45.054 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:45.054 [174/738] Linking static target lib/librte_metrics.a
00:02:45.312 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:45.312 [176/738] Linking static target lib/librte_timer.a
00:02:45.312 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.570 [178/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.570 [179/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:45.570 [180/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:45.570 [181/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:45.570 [182/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.570 [183/738] Generating lib/rte_acl_def with a custom command
00:02:45.570 [184/738] Generating lib/rte_acl_mingw with a custom command
00:02:45.828 [185/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:45.828 [186/738] Generating lib/rte_bbdev_def with a custom command
00:02:45.828 [187/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:45.828 [188/738] Generating lib/rte_bbdev_mingw with a custom command
00:02:45.829 [189/738] Generating lib/rte_bitratestats_def with a custom command
00:02:45.829 [190/738] Generating lib/rte_bitratestats_mingw with a custom command
00:02:46.086 [191/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:46.086 [192/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:46.086 [193/738] Linking static target lib/librte_bitratestats.a
00:02:46.086 [194/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:46.086 [195/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:46.086 [196/738] Linking static target lib/librte_ethdev.a
00:02:46.086 [197/738] Linking static target lib/librte_bbdev.a
00:02:46.086 [198/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:46.344 [199/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.344 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:46.344 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:46.602 [202/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.602 [203/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:46.602 [204/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:46.859 [205/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:46.859 [206/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:46.859 [207/738] Generating lib/rte_bpf_def with a custom command
00:02:46.859 [208/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:46.859 [209/738] Generating lib/rte_bpf_mingw with a custom command
00:02:46.859 [210/738] Linking static target lib/librte_hash.a
00:02:47.117 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:47.117 [212/738] Generating lib/rte_cfgfile_def with a custom command
00:02:47.117 [213/738] Generating lib/rte_cfgfile_mingw with a custom command
00:02:47.117 [214/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:47.117 [215/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:47.117 [216/738] Linking static target lib/librte_cfgfile.a
00:02:47.375 [217/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:47.375 [218/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.375 [219/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:47.375 [220/738] Generating lib/rte_compressdev_def with a custom command
00:02:47.375 [221/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:47.375 [222/738] Linking static target lib/librte_bpf.a
00:02:47.375 [223/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.375 [224/738] Generating lib/rte_compressdev_mingw with a custom command
00:02:47.633 [225/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:47.633 [226/738] Generating lib/rte_cryptodev_def with a custom command
00:02:47.633 [227/738] Generating lib/rte_cryptodev_mingw with a custom command
00:02:47.633 [228/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:47.633 [229/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:47.633 [230/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.633 [231/738] Linking static target lib/librte_compressdev.a
00:02:47.633 [232/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:47.633 [233/738] Generating lib/rte_distributor_def with a custom command
00:02:47.891 [234/738] Generating lib/rte_distributor_mingw with a custom command
00:02:47.891 [235/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:47.891 [236/738] Linking static target lib/librte_acl.a
00:02:47.891 [237/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:47.891 [238/738] Generating lib/rte_efd_def with a custom command
00:02:47.891 [239/738] Generating lib/rte_efd_mingw with a custom command
00:02:47.891 [240/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:47.891 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:48.149 [242/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.149 [243/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:48.149 [244/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.149 [245/738] Linking target lib/librte_eal.so.23.0
00:02:48.149 [246/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.149 [247/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:48.407 [248/738] Linking static target lib/librte_distributor.a
00:02:48.407 [249/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:48.407 [250/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:48.407 [251/738] Linking target lib/librte_ring.so.23.0
00:02:48.407 [252/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:48.407 [253/738] Linking target lib/librte_rcu.so.23.0
00:02:48.407 [254/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.407 [255/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:48.407 [256/738] Linking target lib/librte_mempool.so.23.0
00:02:48.407 [257/738] Linking target lib/librte_meter.so.23.0
00:02:48.665 [258/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:48.665 [259/738] Linking target lib/librte_pci.so.23.0
00:02:48.665 [260/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:48.665 [261/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:48.665 [262/738] Linking target lib/librte_mbuf.so.23.0
00:02:48.665 [263/738] Linking target lib/librte_timer.so.23.0
00:02:48.665 [264/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:48.665 [265/738] Linking target lib/librte_acl.so.23.0
00:02:48.665 [266/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:48.665 [267/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:48.665 [268/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:48.665 [269/738] Linking target lib/librte_cfgfile.so.23.0
00:02:48.665 [270/738] Linking target lib/librte_net.so.23.0
00:02:48.665 [271/738] Linking target lib/librte_bbdev.so.23.0
00:02:48.665 [272/738] Linking target lib/librte_compressdev.so.23.0
00:02:48.665 [273/738] Linking static target lib/librte_efd.a
00:02:48.665 [274/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:48.665 [275/738] Linking target lib/librte_distributor.so.23.0
00:02:48.922 [276/738] Generating lib/rte_eventdev_def with a custom command
00:02:48.922 [277/738] Generating lib/rte_eventdev_mingw with a custom command
00:02:48.922 [278/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:48.922 [279/738] Linking target lib/librte_cmdline.so.23.0
00:02:48.922 [280/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:48.922 [281/738] Linking target lib/librte_hash.so.23.0
00:02:48.922 [282/738] Generating lib/rte_gpudev_def with a custom command
00:02:48.922 [283/738] Generating lib/rte_gpudev_mingw with a custom command
00:02:48.922 [284/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.922 [285/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:48.922 [286/738] Linking target lib/librte_efd.so.23.0
00:02:48.922 [287/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:48.922 [288/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:49.180 [289/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.180 [290/738] Linking target lib/librte_ethdev.so.23.0
00:02:49.180 [291/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:49.180 [292/738] Linking static target lib/librte_cryptodev.a
00:02:49.180 [293/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:49.180 [294/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:49.180 [295/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:49.438 [296/738] Generating lib/rte_gro_def with a custom command
00:02:49.438 [297/738] Generating lib/rte_gro_mingw with a custom command
00:02:49.438 [298/738] Linking target lib/librte_metrics.so.23.0
00:02:49.438 [299/738] Linking target lib/librte_bpf.so.23.0
00:02:49.438 [300/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:49.438 [301/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:49.438 [302/738] Linking static target lib/librte_gpudev.a
00:02:49.438 [303/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:49.438 [304/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:49.438 [305/738] Linking target lib/librte_bitratestats.so.23.0
00:02:49.438 [306/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:49.438 [307/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:49.438 [308/738] Linking static target lib/librte_gro.a
00:02:49.696 [309/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.696 [310/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:49.696 [311/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:49.696 [312/738] Linking target lib/librte_gro.so.23.0
00:02:49.696 [313/738] Generating lib/rte_gso_def with a custom command
00:02:49.696 [314/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:49.696 [315/738] Generating lib/rte_gso_mingw with a custom command
00:02:49.696 [316/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:49.696 [317/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:49.696 [318/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:49.955 [319/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:49.955 [320/738] Linking static target lib/librte_gso.a
00:02:49.955 [321/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.955 [322/738] Linking target lib/librte_gpudev.so.23.0
00:02:49.955 [323/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:49.955 [324/738] Generating lib/rte_ip_frag_def with a custom command
00:02:49.955 [325/738] Generating lib/rte_ip_frag_mingw with a custom command
00:02:49.955 [326/738] Generating lib/rte_jobstats_def with a custom command
00:02:49.955 [327/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:49.955 [328/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:49.955 [329/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.955 [330/738] Generating lib/rte_jobstats_mingw with a custom command
00:02:49.955 [331/738] Linking static target lib/librte_eventdev.a
00:02:50.212 [332/738] Generating lib/rte_latencystats_def with a custom command
00:02:50.212 [333/738] Linking target lib/librte_gso.so.23.0
00:02:50.212 [334/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
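(Aside: the `Generating symbol file` and `*.sym_chk` steps above record and check the symbols each freshly linked shared library exports. When one of those checks misbehaves, the exported surface can be inspected directly; a hedged sketch using GNU binutils — the library picked here is just one of those linked above, and any of the .so files works the same way:

# List the dynamic symbols a built DPDK shared object exports,
# e.g. after "[333/738] Linking target lib/librte_gso.so.23.0":
nm --dynamic --defined-only build-tmp/lib/librte_gso.so.23.0 | head
# Assumption based on the upstream DPDK tree layout: the map the check
# compares against sits next to the library sources.
head lib/gso/version.map

End of aside; the build log continues below.)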
00:02:50.212 [335/738] Linking static target lib/librte_jobstats.a
00:02:50.212 [336/738] Generating lib/rte_latencystats_mingw with a custom command
00:02:50.212 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:50.212 [338/738] Generating lib/rte_lpm_def with a custom command
00:02:50.212 [339/738] Generating lib/rte_lpm_mingw with a custom command
00:02:50.212 [340/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:50.212 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:50.212 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:50.212 [343/738] Linking static target lib/librte_ip_frag.a
00:02:50.212 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.212 [345/738] Linking target lib/librte_jobstats.so.23.0
00:02:50.470 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:50.470 [347/738] Linking static target lib/librte_latencystats.a
00:02:50.470 [348/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.470 [349/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:50.470 [350/738] Linking target lib/librte_ip_frag.so.23.0
00:02:50.470 [351/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:50.470 [352/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.470 [353/738] Generating lib/rte_member_def with a custom command
00:02:50.728 [354/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:50.728 [355/738] Generating lib/rte_member_mingw with a custom command
00:02:50.728 [356/738] Linking target lib/librte_latencystats.so.23.0
00:02:50.728 [357/738] Generating lib/rte_pcapng_def with a custom command
00:02:50.728 [358/738] Generating lib/rte_pcapng_mingw with a custom command
00:02:50.728 [359/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:50.728 [360/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:50.728 [361/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.728 [362/738] Linking target lib/librte_cryptodev.so.23.0
00:02:50.728 [363/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o
00:02:50.728 [364/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:50.728 [365/738] Linking static target lib/librte_lpm.a
00:02:50.984 [366/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:50.984 [367/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:50.984 [368/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:50.985 [369/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:50.985 [370/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:50.985 [371/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:50.985 [372/738] Generating lib/rte_power_def with a custom command
00:02:50.985 [373/738] Generating lib/rte_power_mingw with a custom command
00:02:50.985 [374/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:50.985 [375/738] Generating lib/rte_rawdev_def with a custom command
00:02:50.985 [376/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.985 [377/738] Generating lib/rte_rawdev_mingw with a custom command
00:02:51.243 [378/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:51.243 [379/738] Linking target lib/librte_lpm.so.23.0
00:02:51.243 [380/738] Linking static target lib/librte_pcapng.a
00:02:51.243 [381/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:51.243 [382/738] Generating lib/rte_regexdev_def with a custom command
00:02:51.243 [383/738] Generating lib/rte_regexdev_mingw with a custom command
00:02:51.243 [384/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:51.243 [385/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:51.243 [386/738] Generating lib/rte_dmadev_def with a custom command
00:02:51.243 [387/738] Generating lib/rte_dmadev_mingw with a custom command
00:02:51.243 [388/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:51.243 [389/738] Linking static target lib/librte_rawdev.a
00:02:51.243 [390/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.243 [391/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.501 [392/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:51.501 [393/738] Linking target lib/librte_pcapng.so.23.0
00:02:51.501 [394/738] Generating lib/rte_rib_def with a custom command
00:02:51.501 [395/738] Linking target lib/librte_eventdev.so.23.0
00:02:51.501 [396/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:51.501 [397/738] Linking static target lib/librte_power.a
00:02:51.501 [398/738] Generating lib/rte_rib_mingw with a custom command
00:02:51.501 [399/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:51.502 [400/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:51.502 [401/738] Generating lib/rte_reorder_def with a custom command
00:02:51.502 [402/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:51.502 [403/738] Linking static target lib/librte_dmadev.a
00:02:51.502 [404/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:51.502 [405/738] Generating lib/rte_reorder_mingw with a custom command
00:02:51.502 [406/738] Linking static target lib/librte_regexdev.a
00:02:51.502 [407/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:51.502 [408/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:51.502 [409/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.759 [410/738] Linking static target lib/librte_member.a
00:02:51.759 [411/738] Linking target lib/librte_rawdev.so.23.0
00:02:51.759 [412/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:51.759 [413/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:51.759 [414/738] Generating lib/rte_sched_def with a custom command
00:02:51.759 [415/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:51.759 [416/738] Generating lib/rte_sched_mingw with a custom command
00:02:51.759 [417/738] Generating lib/rte_security_mingw with a custom command
00:02:51.759 [418/738] Generating lib/rte_security_def with a custom command
00:02:51.759 [419/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:51.759 [420/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:51.759 [421/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:51.759 [422/738] Linking static target lib/librte_reorder.a
00:02:51.759 [423/738] Generating lib/rte_stack_def with a custom command
00:02:51.759 [424/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:51.759 [425/738] Linking static target lib/librte_stack.a
00:02:51.759 [426/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.759 [427/738] Generating lib/rte_stack_mingw with a custom command
00:02:51.759 [428/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:51.759 [429/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.759 [430/738] Linking static target lib/librte_rib.a
00:02:52.017 [431/738] Linking target lib/librte_member.so.23.0
00:02:52.017 [432/738] Linking target lib/librte_dmadev.so.23.0
00:02:52.017 [433/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.017 [434/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.017 [435/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:52.017 [436/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:52.017 [437/738] Linking target lib/librte_stack.so.23.0
00:02:52.017 [438/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.017 [439/738] Linking target lib/librte_regexdev.so.23.0
00:02:52.017 [440/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.017 [441/738] Linking target lib/librte_reorder.so.23.0
00:02:52.017 [442/738] Linking target lib/librte_power.so.23.0
00:02:52.017 [443/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:52.017 [444/738] Linking static target lib/librte_security.a
00:02:52.275 [445/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.275 [446/738] Linking target lib/librte_rib.so.23.0
00:02:52.275 [447/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:52.275 [448/738] Generating lib/rte_vhost_def with a custom command
00:02:52.275 [449/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:52.275 [450/738] Generating lib/rte_vhost_mingw with a custom command
00:02:52.275 [451/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:52.533 [452/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.533 [453/738] Linking target lib/librte_security.so.23.0
00:02:52.534 [454/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:52.534 [455/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:52.792 [456/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:52.792 [457/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:52.792 [458/738] Generating lib/rte_ipsec_def with a custom command
00:02:52.792 [459/738] Generating lib/rte_ipsec_mingw with a custom command
00:02:52.792 [460/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:53.050 [461/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:53.050 [462/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:53.050 [463/738] Linking static target lib/librte_sched.a
00:02:53.050 [464/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:53.050 [465/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:53.050 [466/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:53.310 [467/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:53.310 [468/738] Generating lib/rte_fib_def with a custom command
00:02:53.310 [469/738] Generating lib/rte_fib_mingw with a custom command
00:02:53.310 [470/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.310 [471/738] Linking target lib/librte_sched.so.23.0
00:02:53.310 [472/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:53.310 [473/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:53.310 [474/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:53.310 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:53.568 [476/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:53.568 [477/738] Linking static target lib/librte_ipsec.a
00:02:53.568 [478/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:53.568 [479/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:53.825 [480/738] Linking static target lib/librte_fib.a
00:02:53.825 [481/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.825 [482/738] Linking target lib/librte_ipsec.so.23.0
00:02:53.825 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:53.825 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:53.825 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:53.825 [486/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.825 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:53.825 [488/738] Linking target lib/librte_fib.so.23.0
00:02:54.082 [489/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:54.340 [490/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:54.340 [491/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:54.340 [492/738] Generating lib/rte_port_def with a custom command
00:02:54.340 [493/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:54.340 [494/738] Generating lib/rte_port_mingw with a custom command
00:02:54.340 [495/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:54.340 [496/738] Generating lib/rte_pdump_def with a custom command
00:02:54.340 [497/738] Generating lib/rte_pdump_mingw with a custom command
00:02:54.597 [498/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:54.597 [499/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:54.597 [500/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:54.597 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:54.597 [502/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:54.597 [503/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:54.855 [504/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:54.855 [505/738] Linking static target lib/librte_port.a
00:02:54.855 [506/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:54.855 [507/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:54.855 [508/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:55.113 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:55.113 [510/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:55.113 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:55.113 [512/738] Linking static target lib/librte_pdump.a
00:02:55.370 [513/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.370 [514/738] Linking target lib/librte_port.so.23.0
00:02:55.370 [515/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.370 [516/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:55.370 [517/738] Linking target lib/librte_pdump.so.23.0
00:02:55.370 [518/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:55.370 [519/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:55.370 [520/738] Generating lib/rte_table_def with a custom command
00:02:55.370 [521/738] Generating lib/rte_table_mingw with a custom command
00:02:55.628 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:55.628 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:55.628 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:55.628 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:55.628 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:55.628 [527/738] Generating lib/rte_pipeline_def with a custom command
00:02:55.628 [528/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:55.628 [529/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:55.628 [530/738] Generating lib/rte_pipeline_mingw with a custom command
00:02:55.628 [531/738] Linking static target lib/librte_table.a
00:02:55.885 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:56.142 [533/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:56.142 [534/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:56.142 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:56.142 [536/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.142 [537/738] Linking target lib/librte_table.so.23.0
00:02:56.142 [538/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:56.142 [539/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:56.142 [540/738] Generating lib/rte_graph_def with a custom command
00:02:56.142 [541/738] Generating lib/rte_graph_mingw with a custom command
00:02:56.400 [542/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:56.400 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:56.400 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:56.400 [545/738] Linking static target lib/librte_graph.a
00:02:56.657 [546/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:56.657 [547/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:56.657 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:56.657 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:56.657 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:56.915 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:56.915 [552/738] Generating lib/rte_node_def with a custom command 00:02:56.915 [553/738] Generating lib/rte_node_mingw with a custom command 00:02:56.915 [554/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:56.915 [555/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.915 [556/738] Linking target lib/librte_graph.so.23.0 00:02:56.915 [557/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:56.915 [558/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:57.172 [559/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:57.172 [560/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:57.172 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:57.172 [562/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.172 [563/738] Generating drivers/rte_bus_pci_def with a custom command 00:02:57.172 [564/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:57.172 [565/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:57.172 [566/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:57.172 [567/738] Generating drivers/rte_bus_vdev_def with a custom command 00:02:57.172 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:57.172 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:02:57.172 [570/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:57.172 [571/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:57.172 [572/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:57.172 [573/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:57.430 [574/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:57.430 [575/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:57.430 [576/738] Linking static target lib/librte_node.a 00:02:57.430 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:57.430 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:57.430 [579/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.430 [580/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.430 [581/738] Linking static target drivers/librte_bus_vdev.a 00:02:57.430 [582/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.430 [583/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.430 [584/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.430 [585/738] Linking target lib/librte_node.so.23.0 00:02:57.430 [586/738] Linking static target drivers/librte_bus_pci.a 00:02:57.430 [587/738] Compiling C object 
drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.430 [588/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.700 [589/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.700 [590/738] Linking target drivers/librte_bus_vdev.so.23.0 00:02:57.700 [591/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:57.700 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:57.700 [593/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.700 [594/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:57.700 [595/738] Linking target drivers/librte_bus_pci.so.23.0 00:02:57.700 [596/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:58.008 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:58.008 [598/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:58.008 [599/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:58.008 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:58.008 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.008 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.008 [603/738] Linking static target drivers/librte_mempool_ring.a 00:02:58.008 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.008 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:02:58.265 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:58.523 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:58.523 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:58.523 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:58.781 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:59.039 [611/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:59.039 [612/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:59.039 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:59.039 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:59.297 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:59.297 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:59.297 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:02:59.297 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:59.297 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:59.862 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:00.120 [621/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:00.120 [622/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:00.120 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:00.120 [624/738] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:00.378 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:00.378 [626/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:00.378 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:00.378 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:00.637 [629/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:00.637 [630/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:00.637 [631/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:00.894 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:00.894 [633/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:00.894 [634/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:00.894 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:00.894 [636/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:01.152 [637/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:01.152 [638/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:01.152 [639/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.152 [640/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:01.152 [641/738] Linking static target drivers/librte_net_i40e.a 00:03:01.152 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:01.152 [643/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.152 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:01.409 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:01.409 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:01.667 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:01.667 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:01.667 [649/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.667 [650/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:01.925 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:01.925 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:01.925 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:01.925 [654/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:01.925 [655/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:01.925 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:01.925 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:02.182 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:02.182 [659/738] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:02.182 [660/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:02.182 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:02.448 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:02.448 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:02.448 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:02.448 [665/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.448 [666/738] Linking static target lib/librte_vhost.a 00:03:02.717 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:02.974 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:02.974 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:02.974 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:03.232 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:03.232 [672/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:03.232 [673/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:03.232 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:03.232 [675/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.232 [676/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:03.232 [677/738] Linking target lib/librte_vhost.so.23.0 00:03:03.489 [678/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:03.489 [679/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:03.489 [680/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:03.489 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:03.489 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:03.489 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:03.746 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:03.746 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:03.746 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:03.746 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:03.746 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:04.003 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:04.003 [690/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:04.260 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:04.260 [692/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:04.260 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:04.260 [694/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:04.517 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:04.517 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:04.773 [697/738] Compiling C object 
app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:04.773 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:04.773 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:05.030 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:05.030 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:05.030 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:05.030 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:05.287 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:05.287 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:05.287 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:05.544 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:05.802 [708/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:05.802 [709/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:05.802 [710/738] Linking static target lib/librte_pipeline.a 00:03:05.802 [711/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:05.802 [712/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:05.802 [713/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:05.802 [714/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:05.802 [715/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:05.802 [716/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:06.060 [717/738] Linking target app/dpdk-dumpcap 00:03:06.060 [718/738] Linking target app/dpdk-pdump 00:03:06.060 [719/738] Linking target app/dpdk-proc-info 00:03:06.060 [720/738] Linking target app/dpdk-test-acl 00:03:06.060 [721/738] Linking target app/dpdk-test-bbdev 00:03:06.060 [722/738] Linking target app/dpdk-test-cmdline 00:03:06.060 [723/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:06.318 [724/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:06.318 [725/738] Linking target app/dpdk-test-compress-perf 00:03:06.318 [726/738] Linking target app/dpdk-test-crypto-perf 00:03:06.318 [727/738] Linking target app/dpdk-test-eventdev 00:03:06.318 [728/738] Linking target app/dpdk-test-fib 00:03:06.318 [729/738] Linking target app/dpdk-test-flow-perf 00:03:06.318 [730/738] Linking target app/dpdk-test-pipeline 00:03:06.576 [731/738] Linking target app/dpdk-test-gpudev 00:03:06.576 [732/738] Linking target app/dpdk-testpmd 00:03:06.576 [733/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:06.576 [734/738] Linking target app/dpdk-test-regex 00:03:06.576 [735/738] Linking target app/dpdk-test-sad 00:03:06.833 [736/738] Linking target app/dpdk-test-security-perf 00:03:08.731 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.731 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:08.731 17:57:42 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:08.731 17:57:42 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:08.731 17:57:42 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:08.731 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:08.731 [0/1] 
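At this point the autobuild script has finished linking the DPDK targets and switches to installation: it checks uname -s against FreeBSD (the backslash escapes in the xtrace output indicate a quoted, literal string comparison rather than a glob match) and, since the host is Linux, runs ninja install on the meson build directory. A minimal sketch of the same flow run by hand, assuming the checkout at /home/vagrant/spdk_repo/dpdk as the log's paths indicate; the install prefix is inferred from the destination paths in the install output that follows, and the meson setup options are assumptions, since the log only shows the final ninja step:

    # configure an out-of-tree build directory; the prefix is inferred from
    # the "Installing ... to /home/vagrant/spdk_repo/dpdk/build/..." lines below
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build
    # build and install with 10 parallel jobs, matching the log's -j10
    ninja -C build-tmp -j10 install

ninja install first finishes any outstanding build edges (the [0/1] counter below) and then runs meson's install script, which emits the per-file "Installing ..." lines that make up the rest of this section.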
Installing files. 00:03:08.994 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:08.994 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.994 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.994 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.994 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.994 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.994 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.995 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.996 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.997 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.997 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.998 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:08.998 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.999 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:08.999 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:08.999 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:08.999 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:08.999 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:09.000 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:09.000 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:09.000 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.000 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:09.000 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.000 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.000 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.000 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.000 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.000 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.000 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.000 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.000 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.296 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.296 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.296 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.296 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.296 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.296 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.296 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.296 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.297 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 
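The "Installing <header> to <prefix>/include" lines running through this section are meson's install stage publishing DPDK's public rte_*.h headers into the local build prefix. A minimal sketch of the equivalent manual sequence, assuming the build directory and prefix visible in this log (the exact meson options used by the job are not shown here):

  meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build
  ninja -C build-tmp
  meson install -C build-tmp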
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.298 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.298 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:09.298 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:09.298 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:09.298 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:09.298 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:09.298 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:09.298 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:09.298 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:09.298 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:09.298 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:09.298 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:09.298 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:09.298 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:09.298 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:09.298 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:09.298 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:09.298 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:09.298 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:09.298 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:09.298 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
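The install run above finishes with the usertools scripts, the generated rte_build_config.h, and the libdpdk.pc/libdpdk-libs.pc files; downstream builds are expected to locate this DPDK through pkg-config rather than hard-coded paths. A sketch, using the paths from this log:

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --cflags --libs libdpdk   # expands to -I.../build/include plus the -lrte_* set

The SPDK configure step further down ("Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...") resolves the freshly built libraries the same way.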
00:03:09.298 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:09.298 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:09.298 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:09.298 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:09.298 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:09.298 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:09.298 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:09.298 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:09.298 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:09.298 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:09.298 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:09.298 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:09.298 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:09.298 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:09.298 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:09.298 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:09.298 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:09.298 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:09.298 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:09.298 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:09.298 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:09.298 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:09.298 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:09.298 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:09.298 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:09.298 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:09.298 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:09.298 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:09.298 
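Each of these symlink installs lays down the conventional two-link ELF versioning chain: a SONAME link that the runtime loader follows, and an unversioned dev link that the compile-time linker follows. For librte_eal, roughly equivalent to (illustrative):

  ln -s librte_eal.so.23.0 librte_eal.so.23   # SONAME link, resolved by ld.so at run time
  ln -s librte_eal.so.23 librte_eal.so        # dev link, resolved by 'cc ... -lrte_eal' at link time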
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:09.298 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:09.298 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:09.298 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:09.298 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:09.298 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:09.298 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:09.298 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:09.298 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:09.298 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:09.298 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:09.298 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:09.298 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:09.298 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:09.298 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:09.298 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:09.298 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:09.298 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:09.298 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:09.298 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:09.298 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:09.298 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:09.298 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:09.298 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:09.298 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:09.298 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:09.298 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:09.298 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:09.298 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:09.298 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:09.298 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:09.298 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:09.298 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:09.298 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:09.298 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:09.298 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:09.298 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:09.298 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:09.298 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:09.298 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:09.299 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:09.299 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:09.299 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:09.299 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:09.299 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:09.299 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:09.299 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:09.299 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:09.299 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:09.299 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:09.299 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:09.299 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:09.299 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:09.299 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:09.299 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:09.299 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:09.299 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:09.299 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:09.299 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:09.299 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:09.299 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:09.299 
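The './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' relocations a few lines up, together with the driver symlinks that follow below, put the PMDs in DPDK's per-ABI plugin directory; in a shared build the EAL scans that directory and dlopen()s the drivers during rte_eal_init(). A driver outside it can still be loaded explicitly with the EAL -d option (application name illustrative):

  ./dpdk-app -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so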
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:09.299 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:09.299 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:09.299 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:09.299 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:09.299 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:09.299 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:09.299 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:09.299 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:09.299 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:09.299 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:09.299 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:09.299 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:09.299 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:09.299 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:09.299 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:09.299 17:57:43 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:09.299 17:57:43 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:09.299 00:03:09.299 real 0m32.967s 00:03:09.299 user 3m36.740s 00:03:09.299 sys 0m33.070s 00:03:09.299 17:57:43 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:09.299 17:57:43 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:09.299 ************************************ 00:03:09.299 END TEST build_native_dpdk 00:03:09.299 ************************************ 00:03:09.299 17:57:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:09.299 17:57:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:09.299 17:57:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:09.299 17:57:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:09.299 17:57:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:09.299 17:57:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:09.299 17:57:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:09.299 17:57:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:09.299 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:09.557 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.557 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:09.557 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:09.815 Using 'verbs' RDMA provider 00:03:20.778 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:32.996 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:32.996 Creating mk/config.mk...done. 00:03:32.996 Creating mk/cc.flags.mk...done. 00:03:32.996 Type 'make' to build. 00:03:32.996 17:58:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:32.996 17:58:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:32.996 17:58:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:32.996 17:58:06 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.997 ************************************ 00:03:32.997 START TEST make 00:03:32.997 ************************************ 00:03:32.997 17:58:06 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:32.997 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:32.997 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:32.997 meson setup builddir \ 00:03:32.997 -Dwith-libaio=enabled \ 00:03:32.997 -Dwith-liburing=enabled \ 00:03:32.997 -Dwith-libvfn=disabled \ 00:03:32.997 -Dwith-spdk=disabled \ 00:03:32.997 -Dexamples=false \ 00:03:32.997 -Dtests=false \ 00:03:32.997 -Dtools=false && \ 00:03:32.997 meson compile -C builddir && \ 00:03:32.997 cd -) 00:03:34.383 The Meson build system 00:03:34.383 Version: 1.5.0 00:03:34.383 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:34.383 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:34.383 Build type: native build 00:03:34.383 Project name: xnvme 00:03:34.383 Project version: 0.7.5 00:03:34.383 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:34.383 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:34.383 Host machine cpu family: x86_64 00:03:34.383 Host machine cpu: x86_64 00:03:34.383 Message: host_machine.system: linux 00:03:34.383 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:34.383 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:34.383 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:34.383 Run-time dependency threads found: YES 00:03:34.383 Has header "setupapi.h" : NO 00:03:34.383 Has header "linux/blkzoned.h" : YES 00:03:34.383 Has header "linux/blkzoned.h" : YES (cached) 00:03:34.383 Has header "libaio.h" : YES 00:03:34.383 Library aio found: YES 00:03:34.383 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:34.383 Run-time dependency liburing found: YES 2.2 00:03:34.383 Dependency libvfn skipped: feature with-libvfn disabled 00:03:34.383 Found CMake: /usr/bin/cmake (3.27.7) 00:03:34.383 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:34.383 Subproject spdk : skipped: feature with-spdk disabled 00:03:34.383 Run-time dependency appleframeworks found: NO (tried framework) 00:03:34.383 Run-time dependency appleframeworks found: NO (tried framework) 00:03:34.383 Library rt found: YES 00:03:34.383 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:34.383 Configuring xnvme_config.h using 
configuration 00:03:34.383 Configuring xnvme.spec using configuration 00:03:34.383 Run-time dependency bash-completion found: YES 2.11 00:03:34.383 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:34.383 Program cp found: YES (/usr/bin/cp) 00:03:34.383 Build targets in project: 3 00:03:34.383 00:03:34.383 xnvme 0.7.5 00:03:34.383 00:03:34.383 Subprojects 00:03:34.383 spdk : NO Feature 'with-spdk' disabled 00:03:34.383 00:03:34.383 User defined options 00:03:34.383 examples : false 00:03:34.383 tests : false 00:03:34.383 tools : false 00:03:34.383 with-libaio : enabled 00:03:34.383 with-liburing: enabled 00:03:34.383 with-libvfn : disabled 00:03:34.383 with-spdk : disabled 00:03:34.383 00:03:34.383 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:34.644 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:34.644 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:34.644 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:34.644 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:34.644 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:34.644 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:34.644 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:34.644 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:34.644 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:34.644 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:34.644 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:34.905 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:34.905 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:34.905 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:34.905 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:34.905 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:34.905 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:34.905 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:34.905 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:34.905 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:34.905 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:34.905 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:34.905 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:34.905 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:34.905 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:34.905 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:34.905 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:34.905 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:34.905 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:34.905 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:34.905 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:34.905 [31/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:34.905 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:34.905 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:34.905 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:34.905 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:34.905 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:34.905 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:34.905 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:34.905 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:35.166 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:35.166 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:35.166 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:35.166 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:35.166 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:35.166 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:35.166 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:35.166 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:35.166 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:35.166 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:35.166 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:35.166 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:35.166 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:35.166 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:35.166 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:35.166 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:35.166 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:35.166 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:35.166 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:35.166 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:35.166 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:35.166 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:35.166 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:35.166 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:35.166 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:35.166 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:35.166 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:35.427 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:35.427 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:35.427 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:35.427 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:35.427 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:35.427 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:35.427 [73/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:35.688 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:35.688 [75/76] Linking static target lib/libxnvme.a 00:03:35.688 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:35.688 INFO: autodetecting backend as ninja 00:03:35.688 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:35.688 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:07.753 CC lib/ut_mock/mock.o 00:04:07.753 CC lib/log/log_flags.o 00:04:07.753 CC lib/log/log.o 00:04:07.753 CC lib/log/log_deprecated.o 00:04:07.753 CC lib/ut/ut.o 00:04:07.753 LIB libspdk_ut_mock.a 00:04:07.753 SO libspdk_ut_mock.so.6.0 00:04:07.753 LIB libspdk_log.a 00:04:07.753 LIB libspdk_ut.a 00:04:07.753 SO libspdk_ut.so.2.0 00:04:07.753 SO libspdk_log.so.7.1 00:04:07.753 SYMLINK libspdk_ut_mock.so 00:04:07.753 SYMLINK libspdk_ut.so 00:04:07.753 SYMLINK libspdk_log.so 00:04:07.753 CXX lib/trace_parser/trace.o 00:04:07.753 CC lib/util/base64.o 00:04:07.753 CC lib/util/bit_array.o 00:04:07.753 CC lib/util/cpuset.o 00:04:07.753 CC lib/ioat/ioat.o 00:04:07.753 CC lib/util/crc16.o 00:04:07.753 CC lib/util/crc32.o 00:04:07.753 CC lib/util/crc32c.o 00:04:07.753 CC lib/dma/dma.o 00:04:07.753 CC lib/vfio_user/host/vfio_user_pci.o 00:04:07.753 CC lib/vfio_user/host/vfio_user.o 00:04:07.753 CC lib/util/crc32_ieee.o 00:04:07.753 CC lib/util/crc64.o 00:04:07.753 CC lib/util/dif.o 00:04:07.753 CC lib/util/fd.o 00:04:07.753 CC lib/util/fd_group.o 00:04:07.753 CC lib/util/file.o 00:04:07.753 CC lib/util/hexlify.o 00:04:07.753 LIB libspdk_dma.a 00:04:07.753 SO libspdk_dma.so.5.0 00:04:07.753 LIB libspdk_ioat.a 00:04:07.753 CC lib/util/iov.o 00:04:07.753 SO libspdk_ioat.so.7.0 00:04:07.753 SYMLINK libspdk_dma.so 00:04:07.753 CC lib/util/math.o 00:04:07.753 CC lib/util/net.o 00:04:07.753 LIB libspdk_vfio_user.a 00:04:07.753 CC lib/util/pipe.o 00:04:07.753 SYMLINK libspdk_ioat.so 00:04:07.753 CC lib/util/strerror_tls.o 00:04:07.753 CC lib/util/string.o 00:04:07.753 SO libspdk_vfio_user.so.5.0 00:04:07.753 SYMLINK libspdk_vfio_user.so 00:04:07.753 CC lib/util/uuid.o 00:04:07.753 CC lib/util/xor.o 00:04:07.753 CC lib/util/zipf.o 00:04:07.753 CC lib/util/md5.o 00:04:07.753 LIB libspdk_trace_parser.a 00:04:07.753 SO libspdk_trace_parser.so.6.0 00:04:07.753 LIB libspdk_util.a 00:04:07.753 SYMLINK libspdk_trace_parser.so 00:04:07.753 SO libspdk_util.so.10.1 00:04:07.753 SYMLINK libspdk_util.so 00:04:07.753 CC lib/rdma_utils/rdma_utils.o 00:04:07.753 CC lib/idxd/idxd.o 00:04:07.753 CC lib/env_dpdk/env.o 00:04:07.753 CC lib/env_dpdk/memory.o 00:04:07.753 CC lib/idxd/idxd_user.o 00:04:07.753 CC lib/idxd/idxd_kernel.o 00:04:07.753 CC lib/env_dpdk/pci.o 00:04:07.753 CC lib/conf/conf.o 00:04:07.753 CC lib/json/json_parse.o 00:04:07.753 CC lib/vmd/vmd.o 00:04:07.753 CC lib/vmd/led.o 00:04:07.753 LIB libspdk_conf.a 00:04:07.753 CC lib/json/json_util.o 00:04:07.753 CC lib/env_dpdk/init.o 00:04:07.753 SO libspdk_conf.so.6.0 00:04:07.753 LIB libspdk_rdma_utils.a 00:04:07.753 SYMLINK libspdk_conf.so 00:04:07.753 CC lib/json/json_write.o 00:04:07.753 CC lib/env_dpdk/threads.o 00:04:07.753 SO libspdk_rdma_utils.so.1.0 00:04:08.010 SYMLINK libspdk_rdma_utils.so 00:04:08.010 CC lib/env_dpdk/pci_ioat.o 00:04:08.010 CC lib/env_dpdk/pci_virtio.o 00:04:08.010 CC lib/env_dpdk/pci_vmd.o 00:04:08.010 CC lib/env_dpdk/pci_idxd.o 00:04:08.010 CC lib/env_dpdk/pci_event.o 00:04:08.010 CC lib/env_dpdk/sigbus_handler.o 00:04:08.010 CC lib/env_dpdk/pci_dpdk.o 00:04:08.010 CC 
lib/rdma_provider/common.o 00:04:08.010 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:08.010 LIB libspdk_json.a 00:04:08.010 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:08.268 SO libspdk_json.so.6.0 00:04:08.268 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:08.268 SYMLINK libspdk_json.so 00:04:08.268 LIB libspdk_idxd.a 00:04:08.268 LIB libspdk_vmd.a 00:04:08.268 SO libspdk_idxd.so.12.1 00:04:08.268 LIB libspdk_rdma_provider.a 00:04:08.268 SO libspdk_vmd.so.6.0 00:04:08.268 SO libspdk_rdma_provider.so.7.0 00:04:08.268 SYMLINK libspdk_idxd.so 00:04:08.268 SYMLINK libspdk_vmd.so 00:04:08.268 SYMLINK libspdk_rdma_provider.so 00:04:08.268 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:08.268 CC lib/jsonrpc/jsonrpc_server.o 00:04:08.268 CC lib/jsonrpc/jsonrpc_client.o 00:04:08.268 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:08.525 LIB libspdk_jsonrpc.a 00:04:08.782 SO libspdk_jsonrpc.so.6.0 00:04:08.782 SYMLINK libspdk_jsonrpc.so 00:04:09.040 LIB libspdk_env_dpdk.a 00:04:09.040 CC lib/rpc/rpc.o 00:04:09.040 SO libspdk_env_dpdk.so.15.1 00:04:09.040 SYMLINK libspdk_env_dpdk.so 00:04:09.298 LIB libspdk_rpc.a 00:04:09.298 SO libspdk_rpc.so.6.0 00:04:09.298 SYMLINK libspdk_rpc.so 00:04:09.556 CC lib/trace/trace.o 00:04:09.556 CC lib/trace/trace_flags.o 00:04:09.556 CC lib/trace/trace_rpc.o 00:04:09.556 CC lib/keyring/keyring.o 00:04:09.556 CC lib/keyring/keyring_rpc.o 00:04:09.556 CC lib/notify/notify_rpc.o 00:04:09.556 CC lib/notify/notify.o 00:04:09.556 LIB libspdk_notify.a 00:04:09.556 SO libspdk_notify.so.6.0 00:04:09.813 SYMLINK libspdk_notify.so 00:04:09.813 LIB libspdk_trace.a 00:04:09.813 LIB libspdk_keyring.a 00:04:09.813 SO libspdk_keyring.so.2.0 00:04:09.813 SO libspdk_trace.so.11.0 00:04:09.813 SYMLINK libspdk_keyring.so 00:04:09.813 SYMLINK libspdk_trace.so 00:04:10.071 CC lib/sock/sock.o 00:04:10.071 CC lib/sock/sock_rpc.o 00:04:10.071 CC lib/thread/thread.o 00:04:10.071 CC lib/thread/iobuf.o 00:04:10.636 LIB libspdk_sock.a 00:04:10.636 SO libspdk_sock.so.10.0 00:04:10.636 SYMLINK libspdk_sock.so 00:04:10.894 CC lib/nvme/nvme_ctrlr.o 00:04:10.894 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:10.894 CC lib/nvme/nvme_fabric.o 00:04:10.894 CC lib/nvme/nvme_ns_cmd.o 00:04:10.894 CC lib/nvme/nvme_ns.o 00:04:10.894 CC lib/nvme/nvme_pcie_common.o 00:04:10.894 CC lib/nvme/nvme_pcie.o 00:04:10.894 CC lib/nvme/nvme.o 00:04:10.894 CC lib/nvme/nvme_qpair.o 00:04:11.459 CC lib/nvme/nvme_quirks.o 00:04:11.459 CC lib/nvme/nvme_transport.o 00:04:11.459 CC lib/nvme/nvme_discovery.o 00:04:11.459 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:11.717 LIB libspdk_thread.a 00:04:11.717 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:11.717 SO libspdk_thread.so.11.0 00:04:11.717 CC lib/nvme/nvme_tcp.o 00:04:11.717 CC lib/nvme/nvme_opal.o 00:04:11.717 SYMLINK libspdk_thread.so 00:04:11.717 CC lib/nvme/nvme_io_msg.o 00:04:11.717 CC lib/nvme/nvme_poll_group.o 00:04:11.975 CC lib/nvme/nvme_zns.o 00:04:11.975 CC lib/nvme/nvme_stubs.o 00:04:11.975 CC lib/nvme/nvme_auth.o 00:04:12.234 CC lib/nvme/nvme_cuse.o 00:04:12.234 CC lib/nvme/nvme_rdma.o 00:04:12.492 CC lib/accel/accel.o 00:04:12.492 CC lib/accel/accel_rpc.o 00:04:12.492 CC lib/blob/blobstore.o 00:04:12.492 CC lib/virtio/virtio.o 00:04:12.492 CC lib/init/json_config.o 00:04:12.749 CC lib/virtio/virtio_vhost_user.o 00:04:12.749 CC lib/init/subsystem.o 00:04:12.749 CC lib/init/subsystem_rpc.o 00:04:12.749 CC lib/init/rpc.o 00:04:12.749 CC lib/blob/request.o 00:04:13.009 CC lib/blob/zeroes.o 00:04:13.009 CC lib/blob/blob_bs_dev.o 00:04:13.009 CC lib/virtio/virtio_vfio_user.o 00:04:13.009 LIB libspdk_init.a 
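The single-word prefixes through this stretch are SPDK's quiet-make tags, one per build rule instead of a full command line. A rough mapping, with the real compiler flags elided (illustrative, using the ut_mock component from the start of the build):

  # CC      compile one object:       cc -c lib/ut_mock/mock.c -o lib/ut_mock/mock.o
  # LIB     archive the static lib:   ar crs build/lib/libspdk_ut_mock.a lib/ut_mock/mock.o
  # SO      link the shared variant:  cc -shared ... -o build/lib/libspdk_ut_mock.so.6.0
  # SYMLINK unversioned dev link:     ln -s libspdk_ut_mock.so.6.0 libspdk_ut_mock.so

The SO/SYMLINK pairs appear because configure was run --with-shared; each component comes out as both a static archive and a versioned shared object, mirroring the DPDK layout installed earlier.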
00:04:13.009 SO libspdk_init.so.6.0 00:04:13.009 CC lib/accel/accel_sw.o 00:04:13.009 SYMLINK libspdk_init.so 00:04:13.009 CC lib/virtio/virtio_pci.o 00:04:13.009 CC lib/fsdev/fsdev_io.o 00:04:13.009 CC lib/fsdev/fsdev.o 00:04:13.267 CC lib/fsdev/fsdev_rpc.o 00:04:13.267 CC lib/event/app.o 00:04:13.267 CC lib/event/reactor.o 00:04:13.267 CC lib/event/log_rpc.o 00:04:13.267 CC lib/event/app_rpc.o 00:04:13.267 LIB libspdk_virtio.a 00:04:13.524 CC lib/event/scheduler_static.o 00:04:13.524 SO libspdk_virtio.so.7.0 00:04:13.524 SYMLINK libspdk_virtio.so 00:04:13.524 LIB libspdk_accel.a 00:04:13.524 SO libspdk_accel.so.16.0 00:04:13.524 LIB libspdk_fsdev.a 00:04:13.781 SO libspdk_fsdev.so.2.0 00:04:13.781 LIB libspdk_nvme.a 00:04:13.781 SYMLINK libspdk_accel.so 00:04:13.781 LIB libspdk_event.a 00:04:13.781 SO libspdk_event.so.14.0 00:04:13.781 SYMLINK libspdk_fsdev.so 00:04:13.781 SO libspdk_nvme.so.15.0 00:04:13.781 SYMLINK libspdk_event.so 00:04:13.781 CC lib/bdev/bdev.o 00:04:13.781 CC lib/bdev/bdev_zone.o 00:04:13.781 CC lib/bdev/scsi_nvme.o 00:04:13.781 CC lib/bdev/part.o 00:04:13.781 CC lib/bdev/bdev_rpc.o 00:04:14.038 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:14.038 SYMLINK libspdk_nvme.so 00:04:14.602 LIB libspdk_fuse_dispatcher.a 00:04:14.602 SO libspdk_fuse_dispatcher.so.1.0 00:04:14.602 SYMLINK libspdk_fuse_dispatcher.so 00:04:15.169 LIB libspdk_blob.a 00:04:15.428 SO libspdk_blob.so.12.0 00:04:15.428 SYMLINK libspdk_blob.so 00:04:15.688 CC lib/lvol/lvol.o 00:04:15.688 CC lib/blobfs/tree.o 00:04:15.688 CC lib/blobfs/blobfs.o 00:04:16.621 LIB libspdk_blobfs.a 00:04:16.621 SO libspdk_blobfs.so.11.0 00:04:16.621 SYMLINK libspdk_blobfs.so 00:04:16.621 LIB libspdk_lvol.a 00:04:16.621 SO libspdk_lvol.so.11.0 00:04:16.621 LIB libspdk_bdev.a 00:04:16.621 SYMLINK libspdk_lvol.so 00:04:16.621 SO libspdk_bdev.so.17.0 00:04:16.878 SYMLINK libspdk_bdev.so 00:04:16.878 CC lib/ublk/ublk_rpc.o 00:04:16.878 CC lib/ublk/ublk.o 00:04:16.878 CC lib/nvmf/ctrlr.o 00:04:16.878 CC lib/nvmf/ctrlr_discovery.o 00:04:16.878 CC lib/nvmf/subsystem.o 00:04:16.878 CC lib/nvmf/ctrlr_bdev.o 00:04:16.878 CC lib/scsi/dev.o 00:04:16.878 CC lib/scsi/lun.o 00:04:16.878 CC lib/nbd/nbd.o 00:04:16.878 CC lib/ftl/ftl_core.o 00:04:17.136 CC lib/ftl/ftl_init.o 00:04:17.136 CC lib/ftl/ftl_layout.o 00:04:17.136 CC lib/scsi/port.o 00:04:17.393 CC lib/scsi/scsi.o 00:04:17.393 CC lib/scsi/scsi_bdev.o 00:04:17.393 CC lib/nbd/nbd_rpc.o 00:04:17.393 CC lib/scsi/scsi_pr.o 00:04:17.393 CC lib/scsi/scsi_rpc.o 00:04:17.393 CC lib/scsi/task.o 00:04:17.393 CC lib/ftl/ftl_debug.o 00:04:17.650 LIB libspdk_nbd.a 00:04:17.650 SO libspdk_nbd.so.7.0 00:04:17.650 CC lib/ftl/ftl_io.o 00:04:17.650 LIB libspdk_ublk.a 00:04:17.650 SYMLINK libspdk_nbd.so 00:04:17.650 CC lib/ftl/ftl_sb.o 00:04:17.650 SO libspdk_ublk.so.3.0 00:04:17.650 CC lib/ftl/ftl_l2p.o 00:04:17.650 SYMLINK libspdk_ublk.so 00:04:17.650 CC lib/nvmf/nvmf.o 00:04:17.650 CC lib/nvmf/nvmf_rpc.o 00:04:17.650 CC lib/ftl/ftl_l2p_flat.o 00:04:17.650 CC lib/ftl/ftl_nv_cache.o 00:04:17.936 CC lib/ftl/ftl_band.o 00:04:17.936 CC lib/ftl/ftl_band_ops.o 00:04:17.936 CC lib/ftl/ftl_writer.o 00:04:17.936 LIB libspdk_scsi.a 00:04:17.936 SO libspdk_scsi.so.9.0 00:04:17.936 CC lib/nvmf/transport.o 00:04:17.936 SYMLINK libspdk_scsi.so 00:04:17.936 CC lib/nvmf/tcp.o 00:04:18.196 CC lib/ftl/ftl_rq.o 00:04:18.196 CC lib/nvmf/stubs.o 00:04:18.196 CC lib/nvmf/mdns_server.o 00:04:18.454 CC lib/vhost/vhost.o 00:04:18.454 CC lib/iscsi/conn.o 00:04:18.454 CC lib/iscsi/init_grp.o 00:04:18.454 CC 
lib/iscsi/iscsi.o 00:04:18.712 CC lib/iscsi/param.o 00:04:18.712 CC lib/iscsi/portal_grp.o 00:04:18.712 CC lib/iscsi/tgt_node.o 00:04:18.712 CC lib/ftl/ftl_reloc.o 00:04:18.712 CC lib/ftl/ftl_l2p_cache.o 00:04:18.712 CC lib/ftl/ftl_p2l.o 00:04:18.970 CC lib/iscsi/iscsi_subsystem.o 00:04:18.970 CC lib/nvmf/rdma.o 00:04:18.970 CC lib/nvmf/auth.o 00:04:18.970 CC lib/iscsi/iscsi_rpc.o 00:04:18.970 CC lib/iscsi/task.o 00:04:19.228 CC lib/vhost/vhost_rpc.o 00:04:19.228 CC lib/ftl/ftl_p2l_log.o 00:04:19.228 CC lib/vhost/vhost_scsi.o 00:04:19.228 CC lib/vhost/vhost_blk.o 00:04:19.486 CC lib/ftl/mngt/ftl_mngt.o 00:04:19.486 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:19.486 CC lib/vhost/rte_vhost_user.o 00:04:19.486 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:19.744 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:19.744 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:19.744 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:19.744 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:19.744 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:19.744 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:20.002 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:20.002 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:20.002 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:20.002 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:20.002 LIB libspdk_iscsi.a 00:04:20.002 CC lib/ftl/utils/ftl_conf.o 00:04:20.002 CC lib/ftl/utils/ftl_md.o 00:04:20.002 SO libspdk_iscsi.so.8.0 00:04:20.002 CC lib/ftl/utils/ftl_mempool.o 00:04:20.002 CC lib/ftl/utils/ftl_bitmap.o 00:04:20.002 CC lib/ftl/utils/ftl_property.o 00:04:20.260 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:20.260 SYMLINK libspdk_iscsi.so 00:04:20.260 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:20.260 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:20.260 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:20.260 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:20.260 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:20.260 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:20.519 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:20.519 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:20.519 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:20.519 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:20.519 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:20.519 LIB libspdk_vhost.a 00:04:20.519 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:20.519 SO libspdk_vhost.so.8.0 00:04:20.519 CC lib/ftl/base/ftl_base_dev.o 00:04:20.519 CC lib/ftl/base/ftl_base_bdev.o 00:04:20.519 CC lib/ftl/ftl_trace.o 00:04:20.519 SYMLINK libspdk_vhost.so 00:04:20.777 LIB libspdk_ftl.a 00:04:21.034 SO libspdk_ftl.so.9.0 00:04:21.034 SYMLINK libspdk_ftl.so 00:04:21.291 LIB libspdk_nvmf.a 00:04:21.291 SO libspdk_nvmf.so.20.0 00:04:21.548 SYMLINK libspdk_nvmf.so 00:04:21.806 CC module/env_dpdk/env_dpdk_rpc.o 00:04:21.806 CC module/accel/ioat/accel_ioat.o 00:04:21.806 CC module/accel/error/accel_error.o 00:04:21.806 CC module/fsdev/aio/fsdev_aio.o 00:04:21.806 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:21.806 CC module/sock/posix/posix.o 00:04:21.806 CC module/blob/bdev/blob_bdev.o 00:04:21.806 CC module/accel/iaa/accel_iaa.o 00:04:21.806 CC module/keyring/file/keyring.o 00:04:21.806 CC module/accel/dsa/accel_dsa.o 00:04:21.806 LIB libspdk_env_dpdk_rpc.a 00:04:21.806 SO libspdk_env_dpdk_rpc.so.6.0 00:04:21.806 SYMLINK libspdk_env_dpdk_rpc.so 00:04:21.806 CC module/keyring/file/keyring_rpc.o 00:04:22.063 CC module/accel/ioat/accel_ioat_rpc.o 00:04:22.063 LIB libspdk_scheduler_dynamic.a 00:04:22.063 SO libspdk_scheduler_dynamic.so.4.0 00:04:22.063 CC module/accel/error/accel_error_rpc.o 00:04:22.063 CC module/accel/iaa/accel_iaa_rpc.o 00:04:22.063 LIB libspdk_keyring_file.a 
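Around here the build shifts from the core lib/ tree to the pluggable module/ tree: accel engines, schedulers, keyring backends, and the bdev modules that follow. For reference, an application built against this shared tree would link roughly as follows (paths taken from this log; the source name and library selection are illustrative, and a real SPDK app typically pulls in more of the libspdk_* set than shown):

  cc hello_bdev.c -I/home/vagrant/spdk_repo/spdk/include \
    -L/home/vagrant/spdk_repo/spdk/build/lib -lspdk_event -lspdk_bdev -lspdk_util \
    -Wl,-rpath,/home/vagrant/spdk_repo/spdk/build/lib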
00:04:22.063 LIB libspdk_blob_bdev.a 00:04:22.063 SO libspdk_keyring_file.so.2.0 00:04:22.063 SYMLINK libspdk_scheduler_dynamic.so 00:04:22.063 SO libspdk_blob_bdev.so.12.0 00:04:22.063 SYMLINK libspdk_keyring_file.so 00:04:22.063 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:22.063 CC module/keyring/linux/keyring.o 00:04:22.063 LIB libspdk_accel_ioat.a 00:04:22.063 CC module/accel/dsa/accel_dsa_rpc.o 00:04:22.063 SYMLINK libspdk_blob_bdev.so 00:04:22.063 SO libspdk_accel_ioat.so.6.0 00:04:22.063 CC module/keyring/linux/keyring_rpc.o 00:04:22.063 LIB libspdk_accel_iaa.a 00:04:22.063 LIB libspdk_accel_error.a 00:04:22.063 SYMLINK libspdk_accel_ioat.so 00:04:22.063 SO libspdk_accel_iaa.so.3.0 00:04:22.063 SO libspdk_accel_error.so.2.0 00:04:22.063 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:22.063 LIB libspdk_accel_dsa.a 00:04:22.063 SYMLINK libspdk_accel_error.so 00:04:22.321 SYMLINK libspdk_accel_iaa.so 00:04:22.321 CC module/fsdev/aio/linux_aio_mgr.o 00:04:22.321 SO libspdk_accel_dsa.so.5.0 00:04:22.321 LIB libspdk_keyring_linux.a 00:04:22.321 SO libspdk_keyring_linux.so.1.0 00:04:22.321 SYMLINK libspdk_accel_dsa.so 00:04:22.321 LIB libspdk_scheduler_dpdk_governor.a 00:04:22.321 SYMLINK libspdk_keyring_linux.so 00:04:22.321 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:22.321 CC module/scheduler/gscheduler/gscheduler.o 00:04:22.321 CC module/bdev/delay/vbdev_delay.o 00:04:22.321 CC module/bdev/error/vbdev_error.o 00:04:22.321 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:22.321 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:22.321 CC module/bdev/error/vbdev_error_rpc.o 00:04:22.321 LIB libspdk_sock_posix.a 00:04:22.321 CC module/bdev/gpt/gpt.o 00:04:22.321 SO libspdk_sock_posix.so.6.0 00:04:22.321 CC module/blobfs/bdev/blobfs_bdev.o 00:04:22.606 LIB libspdk_fsdev_aio.a 00:04:22.606 CC module/bdev/lvol/vbdev_lvol.o 00:04:22.606 LIB libspdk_scheduler_gscheduler.a 00:04:22.606 SO libspdk_fsdev_aio.so.1.0 00:04:22.606 SYMLINK libspdk_sock_posix.so 00:04:22.606 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:22.606 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:22.606 SO libspdk_scheduler_gscheduler.so.4.0 00:04:22.606 SYMLINK libspdk_fsdev_aio.so 00:04:22.606 SYMLINK libspdk_scheduler_gscheduler.so 00:04:22.606 CC module/bdev/gpt/vbdev_gpt.o 00:04:22.606 CC module/bdev/malloc/bdev_malloc.o 00:04:22.606 LIB libspdk_blobfs_bdev.a 00:04:22.606 CC module/bdev/null/bdev_null.o 00:04:22.606 LIB libspdk_bdev_delay.a 00:04:22.606 SO libspdk_blobfs_bdev.so.6.0 00:04:22.606 LIB libspdk_bdev_error.a 00:04:22.606 SO libspdk_bdev_delay.so.6.0 00:04:22.870 SO libspdk_bdev_error.so.6.0 00:04:22.870 CC module/bdev/passthru/vbdev_passthru.o 00:04:22.870 CC module/bdev/nvme/bdev_nvme.o 00:04:22.870 SYMLINK libspdk_blobfs_bdev.so 00:04:22.870 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:22.870 SYMLINK libspdk_bdev_error.so 00:04:22.870 SYMLINK libspdk_bdev_delay.so 00:04:22.870 CC module/bdev/nvme/nvme_rpc.o 00:04:22.870 CC module/bdev/nvme/bdev_mdns_client.o 00:04:22.870 LIB libspdk_bdev_gpt.a 00:04:22.870 SO libspdk_bdev_gpt.so.6.0 00:04:22.870 CC module/bdev/null/bdev_null_rpc.o 00:04:22.870 SYMLINK libspdk_bdev_gpt.so 00:04:22.870 CC module/bdev/nvme/vbdev_opal.o 00:04:22.870 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:22.870 LIB libspdk_bdev_lvol.a 00:04:23.129 SO libspdk_bdev_lvol.so.6.0 00:04:23.129 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:23.129 LIB libspdk_bdev_null.a 00:04:23.129 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:23.129 SO libspdk_bdev_null.so.6.0 00:04:23.129 CC 
module/bdev/raid/bdev_raid.o 00:04:23.129 SYMLINK libspdk_bdev_lvol.so 00:04:23.129 CC module/bdev/raid/bdev_raid_rpc.o 00:04:23.129 SYMLINK libspdk_bdev_null.so 00:04:23.129 CC module/bdev/raid/bdev_raid_sb.o 00:04:23.129 CC module/bdev/split/vbdev_split.o 00:04:23.129 CC module/bdev/split/vbdev_split_rpc.o 00:04:23.129 LIB libspdk_bdev_malloc.a 00:04:23.129 LIB libspdk_bdev_passthru.a 00:04:23.129 SO libspdk_bdev_malloc.so.6.0 00:04:23.129 SO libspdk_bdev_passthru.so.6.0 00:04:23.129 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:23.129 SYMLINK libspdk_bdev_malloc.so 00:04:23.129 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:23.129 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:23.129 SYMLINK libspdk_bdev_passthru.so 00:04:23.387 CC module/bdev/raid/raid0.o 00:04:23.387 CC module/bdev/raid/raid1.o 00:04:23.387 LIB libspdk_bdev_split.a 00:04:23.387 SO libspdk_bdev_split.so.6.0 00:04:23.387 CC module/bdev/raid/concat.o 00:04:23.387 CC module/bdev/xnvme/bdev_xnvme.o 00:04:23.387 SYMLINK libspdk_bdev_split.so 00:04:23.387 CC module/bdev/aio/bdev_aio.o 00:04:23.645 LIB libspdk_bdev_zone_block.a 00:04:23.645 CC module/bdev/aio/bdev_aio_rpc.o 00:04:23.645 SO libspdk_bdev_zone_block.so.6.0 00:04:23.645 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:23.645 CC module/bdev/ftl/bdev_ftl.o 00:04:23.645 CC module/bdev/iscsi/bdev_iscsi.o 00:04:23.645 SYMLINK libspdk_bdev_zone_block.so 00:04:23.645 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:23.645 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:23.645 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:23.645 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:23.645 LIB libspdk_bdev_xnvme.a 00:04:23.645 SO libspdk_bdev_xnvme.so.3.0 00:04:23.903 LIB libspdk_bdev_aio.a 00:04:23.903 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:23.903 SO libspdk_bdev_aio.so.6.0 00:04:23.903 SYMLINK libspdk_bdev_xnvme.so 00:04:23.903 LIB libspdk_bdev_ftl.a 00:04:23.903 SO libspdk_bdev_ftl.so.6.0 00:04:23.903 SYMLINK libspdk_bdev_aio.so 00:04:23.903 SYMLINK libspdk_bdev_ftl.so 00:04:23.903 LIB libspdk_bdev_iscsi.a 00:04:23.903 SO libspdk_bdev_iscsi.so.6.0 00:04:24.161 SYMLINK libspdk_bdev_iscsi.so 00:04:24.161 LIB libspdk_bdev_raid.a 00:04:24.161 SO libspdk_bdev_raid.so.6.0 00:04:24.161 LIB libspdk_bdev_virtio.a 00:04:24.161 SO libspdk_bdev_virtio.so.6.0 00:04:24.161 SYMLINK libspdk_bdev_raid.so 00:04:24.161 SYMLINK libspdk_bdev_virtio.so 00:04:25.095 LIB libspdk_bdev_nvme.a 00:04:25.095 SO libspdk_bdev_nvme.so.7.1 00:04:25.095 SYMLINK libspdk_bdev_nvme.so 00:04:25.660 CC module/event/subsystems/fsdev/fsdev.o 00:04:25.660 CC module/event/subsystems/scheduler/scheduler.o 00:04:25.660 CC module/event/subsystems/iobuf/iobuf.o 00:04:25.660 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:25.660 CC module/event/subsystems/keyring/keyring.o 00:04:25.660 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:25.660 CC module/event/subsystems/sock/sock.o 00:04:25.660 CC module/event/subsystems/vmd/vmd.o 00:04:25.660 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:25.660 LIB libspdk_event_scheduler.a 00:04:25.660 LIB libspdk_event_fsdev.a 00:04:25.660 LIB libspdk_event_sock.a 00:04:25.660 LIB libspdk_event_vhost_blk.a 00:04:25.660 SO libspdk_event_scheduler.so.4.0 00:04:25.660 LIB libspdk_event_iobuf.a 00:04:25.660 SO libspdk_event_sock.so.5.0 00:04:25.660 SO libspdk_event_fsdev.so.1.0 00:04:25.660 LIB libspdk_event_vmd.a 00:04:25.660 LIB libspdk_event_keyring.a 00:04:25.660 SO libspdk_event_vhost_blk.so.3.0 00:04:25.660 SO libspdk_event_iobuf.so.3.0 00:04:25.660 SO 
libspdk_event_keyring.so.1.0 00:04:25.660 SO libspdk_event_vmd.so.6.0 00:04:25.660 SYMLINK libspdk_event_fsdev.so 00:04:25.660 SYMLINK libspdk_event_sock.so 00:04:25.660 SYMLINK libspdk_event_scheduler.so 00:04:25.660 SYMLINK libspdk_event_vhost_blk.so 00:04:25.660 SYMLINK libspdk_event_keyring.so 00:04:25.660 SYMLINK libspdk_event_iobuf.so 00:04:25.660 SYMLINK libspdk_event_vmd.so 00:04:25.918 CC module/event/subsystems/accel/accel.o 00:04:26.176 LIB libspdk_event_accel.a 00:04:26.176 SO libspdk_event_accel.so.6.0 00:04:26.176 SYMLINK libspdk_event_accel.so 00:04:26.436 CC module/event/subsystems/bdev/bdev.o 00:04:26.436 LIB libspdk_event_bdev.a 00:04:26.693 SO libspdk_event_bdev.so.6.0 00:04:26.693 SYMLINK libspdk_event_bdev.so 00:04:26.693 CC module/event/subsystems/scsi/scsi.o 00:04:26.693 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:26.693 CC module/event/subsystems/ublk/ublk.o 00:04:26.693 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:26.693 CC module/event/subsystems/nbd/nbd.o 00:04:26.950 LIB libspdk_event_ublk.a 00:04:26.950 SO libspdk_event_ublk.so.3.0 00:04:26.950 LIB libspdk_event_nbd.a 00:04:26.950 LIB libspdk_event_scsi.a 00:04:26.950 SO libspdk_event_nbd.so.6.0 00:04:26.950 SYMLINK libspdk_event_ublk.so 00:04:26.950 SO libspdk_event_scsi.so.6.0 00:04:26.950 SYMLINK libspdk_event_nbd.so 00:04:26.950 LIB libspdk_event_nvmf.a 00:04:26.950 SYMLINK libspdk_event_scsi.so 00:04:26.950 SO libspdk_event_nvmf.so.6.0 00:04:26.950 SYMLINK libspdk_event_nvmf.so 00:04:27.207 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:27.207 CC module/event/subsystems/iscsi/iscsi.o 00:04:27.207 LIB libspdk_event_vhost_scsi.a 00:04:27.207 SO libspdk_event_vhost_scsi.so.3.0 00:04:27.207 LIB libspdk_event_iscsi.a 00:04:27.465 SO libspdk_event_iscsi.so.6.0 00:04:27.465 SYMLINK libspdk_event_vhost_scsi.so 00:04:27.465 SYMLINK libspdk_event_iscsi.so 00:04:27.465 SO libspdk.so.6.0 00:04:27.465 SYMLINK libspdk.so 00:04:27.723 TEST_HEADER include/spdk/accel.h 00:04:27.723 TEST_HEADER include/spdk/accel_module.h 00:04:27.723 TEST_HEADER include/spdk/assert.h 00:04:27.723 TEST_HEADER include/spdk/barrier.h 00:04:27.723 CXX app/trace/trace.o 00:04:27.723 TEST_HEADER include/spdk/base64.h 00:04:27.723 TEST_HEADER include/spdk/bdev.h 00:04:27.723 TEST_HEADER include/spdk/bdev_module.h 00:04:27.723 CC app/trace_record/trace_record.o 00:04:27.723 TEST_HEADER include/spdk/bdev_zone.h 00:04:27.723 TEST_HEADER include/spdk/bit_array.h 00:04:27.723 TEST_HEADER include/spdk/bit_pool.h 00:04:27.723 CC test/rpc_client/rpc_client_test.o 00:04:27.723 TEST_HEADER include/spdk/blob_bdev.h 00:04:27.723 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:27.723 TEST_HEADER include/spdk/blobfs.h 00:04:27.723 TEST_HEADER include/spdk/blob.h 00:04:27.723 TEST_HEADER include/spdk/conf.h 00:04:27.723 TEST_HEADER include/spdk/config.h 00:04:27.723 TEST_HEADER include/spdk/cpuset.h 00:04:27.723 TEST_HEADER include/spdk/crc16.h 00:04:27.723 TEST_HEADER include/spdk/crc32.h 00:04:27.723 TEST_HEADER include/spdk/crc64.h 00:04:27.723 TEST_HEADER include/spdk/dif.h 00:04:27.723 TEST_HEADER include/spdk/dma.h 00:04:27.723 TEST_HEADER include/spdk/endian.h 00:04:27.723 TEST_HEADER include/spdk/env_dpdk.h 00:04:27.723 TEST_HEADER include/spdk/env.h 00:04:27.723 TEST_HEADER include/spdk/event.h 00:04:27.723 TEST_HEADER include/spdk/fd_group.h 00:04:27.723 TEST_HEADER include/spdk/fd.h 00:04:27.723 TEST_HEADER include/spdk/file.h 00:04:27.723 CC app/nvmf_tgt/nvmf_main.o 00:04:27.723 TEST_HEADER include/spdk/fsdev.h 00:04:27.723 
TEST_HEADER include/spdk/fsdev_module.h 00:04:27.723 TEST_HEADER include/spdk/ftl.h 00:04:27.723 TEST_HEADER include/spdk/gpt_spec.h 00:04:27.723 TEST_HEADER include/spdk/hexlify.h 00:04:27.723 TEST_HEADER include/spdk/histogram_data.h 00:04:27.723 TEST_HEADER include/spdk/idxd.h 00:04:27.723 TEST_HEADER include/spdk/idxd_spec.h 00:04:27.723 TEST_HEADER include/spdk/init.h 00:04:27.723 TEST_HEADER include/spdk/ioat.h 00:04:27.723 TEST_HEADER include/spdk/ioat_spec.h 00:04:27.723 TEST_HEADER include/spdk/iscsi_spec.h 00:04:27.723 TEST_HEADER include/spdk/json.h 00:04:27.724 TEST_HEADER include/spdk/jsonrpc.h 00:04:27.724 TEST_HEADER include/spdk/keyring.h 00:04:27.724 TEST_HEADER include/spdk/keyring_module.h 00:04:27.724 CC test/thread/poller_perf/poller_perf.o 00:04:27.724 TEST_HEADER include/spdk/likely.h 00:04:27.724 CC examples/util/zipf/zipf.o 00:04:27.724 TEST_HEADER include/spdk/log.h 00:04:27.724 TEST_HEADER include/spdk/lvol.h 00:04:27.724 TEST_HEADER include/spdk/md5.h 00:04:27.724 TEST_HEADER include/spdk/memory.h 00:04:27.724 TEST_HEADER include/spdk/mmio.h 00:04:27.724 TEST_HEADER include/spdk/nbd.h 00:04:27.724 TEST_HEADER include/spdk/net.h 00:04:27.724 TEST_HEADER include/spdk/notify.h 00:04:27.724 TEST_HEADER include/spdk/nvme.h 00:04:27.724 TEST_HEADER include/spdk/nvme_intel.h 00:04:27.724 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:27.724 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:27.724 TEST_HEADER include/spdk/nvme_spec.h 00:04:27.724 TEST_HEADER include/spdk/nvme_zns.h 00:04:27.724 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:27.724 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:27.724 TEST_HEADER include/spdk/nvmf.h 00:04:27.724 TEST_HEADER include/spdk/nvmf_spec.h 00:04:27.724 TEST_HEADER include/spdk/nvmf_transport.h 00:04:27.724 CC test/app/bdev_svc/bdev_svc.o 00:04:27.724 TEST_HEADER include/spdk/opal.h 00:04:27.724 TEST_HEADER include/spdk/opal_spec.h 00:04:27.724 TEST_HEADER include/spdk/pci_ids.h 00:04:27.724 TEST_HEADER include/spdk/pipe.h 00:04:27.724 TEST_HEADER include/spdk/queue.h 00:04:27.724 TEST_HEADER include/spdk/reduce.h 00:04:27.724 TEST_HEADER include/spdk/rpc.h 00:04:27.724 TEST_HEADER include/spdk/scheduler.h 00:04:27.724 TEST_HEADER include/spdk/scsi.h 00:04:27.724 TEST_HEADER include/spdk/scsi_spec.h 00:04:27.724 TEST_HEADER include/spdk/sock.h 00:04:27.724 TEST_HEADER include/spdk/stdinc.h 00:04:27.724 TEST_HEADER include/spdk/string.h 00:04:27.724 TEST_HEADER include/spdk/thread.h 00:04:27.724 TEST_HEADER include/spdk/trace.h 00:04:27.724 CC test/dma/test_dma/test_dma.o 00:04:27.724 TEST_HEADER include/spdk/trace_parser.h 00:04:27.724 TEST_HEADER include/spdk/tree.h 00:04:27.981 TEST_HEADER include/spdk/ublk.h 00:04:27.981 CC test/env/mem_callbacks/mem_callbacks.o 00:04:27.981 LINK rpc_client_test 00:04:27.981 TEST_HEADER include/spdk/util.h 00:04:27.981 TEST_HEADER include/spdk/uuid.h 00:04:27.981 TEST_HEADER include/spdk/version.h 00:04:27.981 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:27.981 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:27.981 TEST_HEADER include/spdk/vhost.h 00:04:27.981 TEST_HEADER include/spdk/vmd.h 00:04:27.981 TEST_HEADER include/spdk/xor.h 00:04:27.981 TEST_HEADER include/spdk/zipf.h 00:04:27.981 CXX test/cpp_headers/accel.o 00:04:27.981 LINK poller_perf 00:04:27.981 LINK nvmf_tgt 00:04:27.981 LINK spdk_trace_record 00:04:27.981 CXX test/cpp_headers/accel_module.o 00:04:27.981 LINK zipf 00:04:27.981 LINK mem_callbacks 00:04:27.981 CXX test/cpp_headers/assert.o 00:04:27.981 LINK bdev_svc 00:04:27.981 CXX 
test/cpp_headers/barrier.o 00:04:27.981 CXX test/cpp_headers/base64.o 00:04:27.981 LINK spdk_trace 00:04:27.981 CXX test/cpp_headers/bdev.o 00:04:28.239 CC test/env/vtophys/vtophys.o 00:04:28.239 CC app/iscsi_tgt/iscsi_tgt.o 00:04:28.239 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:28.239 CC examples/ioat/perf/perf.o 00:04:28.239 CC examples/ioat/verify/verify.o 00:04:28.239 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:28.239 LINK vtophys 00:04:28.239 CXX test/cpp_headers/bdev_module.o 00:04:28.239 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:28.239 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:28.239 LINK test_dma 00:04:28.239 LINK env_dpdk_post_init 00:04:28.239 LINK iscsi_tgt 00:04:28.239 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:28.497 LINK ioat_perf 00:04:28.497 LINK verify 00:04:28.497 CXX test/cpp_headers/bdev_zone.o 00:04:28.497 CC test/app/histogram_perf/histogram_perf.o 00:04:28.497 CXX test/cpp_headers/bit_array.o 00:04:28.497 CC test/env/memory/memory_ut.o 00:04:28.497 LINK histogram_perf 00:04:28.497 CC test/app/jsoncat/jsoncat.o 00:04:28.497 CC app/spdk_tgt/spdk_tgt.o 00:04:28.497 CXX test/cpp_headers/bit_pool.o 00:04:28.755 LINK nvme_fuzz 00:04:28.755 CC examples/vmd/lsvmd/lsvmd.o 00:04:28.755 CC test/env/pci/pci_ut.o 00:04:28.755 LINK jsoncat 00:04:28.755 CC examples/vmd/led/led.o 00:04:28.755 LINK lsvmd 00:04:28.755 CXX test/cpp_headers/blob_bdev.o 00:04:28.755 LINK vhost_fuzz 00:04:28.755 LINK spdk_tgt 00:04:28.755 LINK led 00:04:28.755 CC app/spdk_lspci/spdk_lspci.o 00:04:29.013 CXX test/cpp_headers/blobfs_bdev.o 00:04:29.013 CC app/spdk_nvme_perf/perf.o 00:04:29.013 CC examples/idxd/perf/perf.o 00:04:29.013 LINK pci_ut 00:04:29.013 CC test/event/event_perf/event_perf.o 00:04:29.013 LINK spdk_lspci 00:04:29.013 CC test/nvme/aer/aer.o 00:04:29.013 CXX test/cpp_headers/blobfs.o 00:04:29.013 LINK event_perf 00:04:29.270 CC test/accel/dif/dif.o 00:04:29.270 CXX test/cpp_headers/blob.o 00:04:29.270 CC test/nvme/reset/reset.o 00:04:29.270 CC test/nvme/sgl/sgl.o 00:04:29.270 LINK idxd_perf 00:04:29.270 LINK memory_ut 00:04:29.270 CC test/event/reactor/reactor.o 00:04:29.270 LINK aer 00:04:29.270 CXX test/cpp_headers/conf.o 00:04:29.528 LINK reactor 00:04:29.528 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:29.528 CC test/nvme/e2edp/nvme_dp.o 00:04:29.528 LINK reset 00:04:29.528 CXX test/cpp_headers/config.o 00:04:29.528 LINK sgl 00:04:29.528 CXX test/cpp_headers/cpuset.o 00:04:29.528 LINK iscsi_fuzz 00:04:29.528 LINK spdk_nvme_perf 00:04:29.528 CXX test/cpp_headers/crc16.o 00:04:29.528 CC test/event/reactor_perf/reactor_perf.o 00:04:29.528 LINK interrupt_tgt 00:04:29.528 CC test/blobfs/mkfs/mkfs.o 00:04:29.785 CC test/event/app_repeat/app_repeat.o 00:04:29.785 LINK dif 00:04:29.785 CC test/event/scheduler/scheduler.o 00:04:29.785 CXX test/cpp_headers/crc32.o 00:04:29.785 LINK nvme_dp 00:04:29.785 LINK reactor_perf 00:04:29.785 LINK mkfs 00:04:29.785 CC app/spdk_nvme_identify/identify.o 00:04:29.785 LINK app_repeat 00:04:29.785 CC test/app/stub/stub.o 00:04:29.785 CXX test/cpp_headers/crc64.o 00:04:29.785 CC examples/thread/thread/thread_ex.o 00:04:29.785 CXX test/cpp_headers/dif.o 00:04:30.043 LINK scheduler 00:04:30.043 CC app/spdk_nvme_discover/discovery_aer.o 00:04:30.043 CC app/spdk_top/spdk_top.o 00:04:30.043 CC test/nvme/overhead/overhead.o 00:04:30.043 LINK stub 00:04:30.043 CXX test/cpp_headers/dma.o 00:04:30.043 CXX test/cpp_headers/endian.o 00:04:30.043 LINK thread 00:04:30.043 LINK spdk_nvme_discover 00:04:30.043 CC test/lvol/esnap/esnap.o 
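[editor's note] The "TEST_HEADER include/spdk/*.h" inventory followed by one "CXX test/cpp_headers/<name>.o" compile per header is a header self-sufficiency check: every public header is built in isolation, so a header that forgets one of its own includes fails immediately. The same idea as a stand-alone script (paths assumed; this is not the project's actual harness):

    #!/usr/bin/env bash
    # Compile every public header on its own; a missing nested include breaks the build.
    set -euo pipefail
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include <spdk/%s.h>\nint main(void) { return 0; }\n' "$name" > "/tmp/${name}.cpp"
        c++ -Iinclude -c "/tmp/${name}.cpp" -o "/tmp/${name}.o"
        echo "CXX test/cpp_headers/${name}.o"
    done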
00:04:30.043 CXX test/cpp_headers/env_dpdk.o 00:04:30.043 CC examples/sock/hello_world/hello_sock.o 00:04:30.043 LINK overhead 00:04:30.301 CXX test/cpp_headers/env.o 00:04:30.301 CXX test/cpp_headers/event.o 00:04:30.301 CC test/bdev/bdevio/bdevio.o 00:04:30.301 CXX test/cpp_headers/fd_group.o 00:04:30.301 CXX test/cpp_headers/fd.o 00:04:30.301 CXX test/cpp_headers/file.o 00:04:30.301 LINK hello_sock 00:04:30.301 CC test/nvme/err_injection/err_injection.o 00:04:30.558 CC app/vhost/vhost.o 00:04:30.558 CXX test/cpp_headers/fsdev.o 00:04:30.558 LINK err_injection 00:04:30.558 CC test/nvme/startup/startup.o 00:04:30.558 LINK spdk_nvme_identify 00:04:30.558 CC examples/accel/perf/accel_perf.o 00:04:30.558 CXX test/cpp_headers/fsdev_module.o 00:04:30.558 CC examples/blob/hello_world/hello_blob.o 00:04:30.558 LINK bdevio 00:04:30.558 LINK vhost 00:04:30.558 CXX test/cpp_headers/ftl.o 00:04:30.816 CC test/nvme/reserve/reserve.o 00:04:30.816 LINK startup 00:04:30.816 CXX test/cpp_headers/gpt_spec.o 00:04:30.816 LINK hello_blob 00:04:30.816 LINK reserve 00:04:30.816 CC examples/nvme/hello_world/hello_world.o 00:04:30.816 CC app/spdk_dd/spdk_dd.o 00:04:30.816 CXX test/cpp_headers/hexlify.o 00:04:30.816 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:31.073 CC app/fio/nvme/fio_plugin.o 00:04:31.073 CC examples/blob/cli/blobcli.o 00:04:31.073 LINK accel_perf 00:04:31.073 CXX test/cpp_headers/histogram_data.o 00:04:31.073 CC test/nvme/simple_copy/simple_copy.o 00:04:31.073 LINK hello_world 00:04:31.073 LINK spdk_top 00:04:31.073 LINK hello_fsdev 00:04:31.073 LINK spdk_dd 00:04:31.073 CXX test/cpp_headers/idxd.o 00:04:31.330 CXX test/cpp_headers/idxd_spec.o 00:04:31.330 LINK simple_copy 00:04:31.330 CC examples/nvme/reconnect/reconnect.o 00:04:31.330 CC app/fio/bdev/fio_plugin.o 00:04:31.330 CXX test/cpp_headers/init.o 00:04:31.330 CC test/nvme/connect_stress/connect_stress.o 00:04:31.330 CC test/nvme/boot_partition/boot_partition.o 00:04:31.587 CXX test/cpp_headers/ioat.o 00:04:31.587 CC examples/bdev/hello_world/hello_bdev.o 00:04:31.587 LINK blobcli 00:04:31.587 LINK boot_partition 00:04:31.587 CC examples/bdev/bdevperf/bdevperf.o 00:04:31.587 LINK spdk_nvme 00:04:31.587 LINK reconnect 00:04:31.587 LINK connect_stress 00:04:31.587 CXX test/cpp_headers/ioat_spec.o 00:04:31.587 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:31.587 CC test/nvme/fused_ordering/fused_ordering.o 00:04:31.587 LINK hello_bdev 00:04:31.587 CC test/nvme/compliance/nvme_compliance.o 00:04:31.587 CXX test/cpp_headers/iscsi_spec.o 00:04:31.587 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:31.844 CC test/nvme/fdp/fdp.o 00:04:31.844 LINK spdk_bdev 00:04:31.844 CXX test/cpp_headers/json.o 00:04:31.844 CXX test/cpp_headers/jsonrpc.o 00:04:31.844 LINK doorbell_aers 00:04:31.844 CXX test/cpp_headers/keyring.o 00:04:31.844 LINK fused_ordering 00:04:31.844 LINK nvme_compliance 00:04:31.844 CXX test/cpp_headers/keyring_module.o 00:04:32.102 CXX test/cpp_headers/likely.o 00:04:32.102 CC test/nvme/cuse/cuse.o 00:04:32.102 CC examples/nvme/arbitration/arbitration.o 00:04:32.102 CC examples/nvme/hotplug/hotplug.o 00:04:32.102 LINK fdp 00:04:32.102 CXX test/cpp_headers/log.o 00:04:32.102 CXX test/cpp_headers/lvol.o 00:04:32.102 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:32.102 LINK nvme_manage 00:04:32.102 CXX test/cpp_headers/md5.o 00:04:32.102 CXX test/cpp_headers/memory.o 00:04:32.359 CXX test/cpp_headers/mmio.o 00:04:32.359 LINK hotplug 00:04:32.359 LINK cmb_copy 00:04:32.359 LINK bdevperf 00:04:32.359 CC 
examples/nvme/abort/abort.o 00:04:32.359 CXX test/cpp_headers/nbd.o 00:04:32.359 LINK arbitration 00:04:32.359 CXX test/cpp_headers/net.o 00:04:32.359 CXX test/cpp_headers/notify.o 00:04:32.359 CXX test/cpp_headers/nvme.o 00:04:32.359 CXX test/cpp_headers/nvme_intel.o 00:04:32.359 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:32.359 CXX test/cpp_headers/nvme_ocssd.o 00:04:32.616 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:32.616 CXX test/cpp_headers/nvme_spec.o 00:04:32.616 CXX test/cpp_headers/nvme_zns.o 00:04:32.616 CXX test/cpp_headers/nvmf_cmd.o 00:04:32.616 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:32.616 LINK pmr_persistence 00:04:32.616 CXX test/cpp_headers/nvmf.o 00:04:32.616 LINK abort 00:04:32.616 CXX test/cpp_headers/nvmf_spec.o 00:04:32.616 CXX test/cpp_headers/nvmf_transport.o 00:04:32.616 CXX test/cpp_headers/opal.o 00:04:32.616 CXX test/cpp_headers/opal_spec.o 00:04:32.616 CXX test/cpp_headers/pci_ids.o 00:04:32.616 CXX test/cpp_headers/pipe.o 00:04:32.616 CXX test/cpp_headers/queue.o 00:04:32.873 CXX test/cpp_headers/reduce.o 00:04:32.873 CXX test/cpp_headers/rpc.o 00:04:32.873 CXX test/cpp_headers/scheduler.o 00:04:32.873 CXX test/cpp_headers/scsi.o 00:04:32.873 CXX test/cpp_headers/scsi_spec.o 00:04:32.873 CXX test/cpp_headers/sock.o 00:04:32.873 CXX test/cpp_headers/stdinc.o 00:04:32.873 CXX test/cpp_headers/string.o 00:04:32.873 CC examples/nvmf/nvmf/nvmf.o 00:04:32.873 CXX test/cpp_headers/thread.o 00:04:32.873 CXX test/cpp_headers/trace.o 00:04:32.873 CXX test/cpp_headers/trace_parser.o 00:04:32.873 CXX test/cpp_headers/tree.o 00:04:32.873 CXX test/cpp_headers/ublk.o 00:04:32.873 CXX test/cpp_headers/util.o 00:04:32.874 CXX test/cpp_headers/uuid.o 00:04:32.874 CXX test/cpp_headers/version.o 00:04:33.131 CXX test/cpp_headers/vfio_user_pci.o 00:04:33.131 LINK cuse 00:04:33.131 CXX test/cpp_headers/vfio_user_spec.o 00:04:33.131 CXX test/cpp_headers/vhost.o 00:04:33.131 CXX test/cpp_headers/xor.o 00:04:33.131 CXX test/cpp_headers/vmd.o 00:04:33.131 CXX test/cpp_headers/zipf.o 00:04:33.131 LINK nvmf 00:04:35.668 LINK esnap 00:04:35.929 ************************************ 00:04:35.929 END TEST make 00:04:35.929 ************************************ 00:04:35.929 00:04:35.929 real 1m3.610s 00:04:35.929 user 5m7.804s 00:04:35.929 sys 0m51.235s 00:04:35.929 17:59:10 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:35.929 17:59:10 make -- common/autotest_common.sh@10 -- $ set +x 00:04:35.929 17:59:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:35.929 17:59:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:35.929 17:59:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:35.929 17:59:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.929 17:59:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:35.929 17:59:10 -- pm/common@44 -- $ pid=5819 00:04:35.929 17:59:10 -- pm/common@50 -- $ kill -TERM 5819 00:04:35.929 17:59:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.929 17:59:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:35.929 17:59:10 -- pm/common@44 -- $ pid=5821 00:04:35.929 17:59:10 -- pm/common@50 -- $ kill -TERM 5821 00:04:35.929 17:59:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:35.929 17:59:10 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh 
/home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.929 17:59:10 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.929 17:59:10 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.929 17:59:10 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.929 17:59:10 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.929 17:59:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.929 17:59:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.929 17:59:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.929 17:59:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.929 17:59:10 -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.929 17:59:10 -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.929 17:59:10 -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.929 17:59:10 -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.929 17:59:10 -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.929 17:59:10 -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.929 17:59:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.929 17:59:10 -- scripts/common.sh@344 -- # case "$op" in 00:04:35.929 17:59:10 -- scripts/common.sh@345 -- # : 1 00:04:35.929 17:59:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.929 17:59:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.929 17:59:10 -- scripts/common.sh@365 -- # decimal 1 00:04:35.929 17:59:10 -- scripts/common.sh@353 -- # local d=1 00:04:35.929 17:59:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.929 17:59:10 -- scripts/common.sh@355 -- # echo 1 00:04:35.929 17:59:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.929 17:59:10 -- scripts/common.sh@366 -- # decimal 2 00:04:35.929 17:59:10 -- scripts/common.sh@353 -- # local d=2 00:04:35.929 17:59:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.929 17:59:10 -- scripts/common.sh@355 -- # echo 2 00:04:35.929 17:59:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.929 17:59:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.929 17:59:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.929 17:59:10 -- scripts/common.sh@368 -- # return 0 00:04:35.929 17:59:10 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.929 17:59:10 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.929 --rc genhtml_branch_coverage=1 00:04:35.929 --rc genhtml_function_coverage=1 00:04:35.929 --rc genhtml_legend=1 00:04:35.929 --rc geninfo_all_blocks=1 00:04:35.929 --rc geninfo_unexecuted_blocks=1 00:04:35.929 00:04:35.929 ' 00:04:35.929 17:59:10 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.929 --rc genhtml_branch_coverage=1 00:04:35.929 --rc genhtml_function_coverage=1 00:04:35.929 --rc genhtml_legend=1 00:04:35.929 --rc geninfo_all_blocks=1 00:04:35.929 --rc geninfo_unexecuted_blocks=1 00:04:35.929 00:04:35.929 ' 00:04:35.929 17:59:10 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.929 --rc genhtml_branch_coverage=1 00:04:35.929 --rc genhtml_function_coverage=1 00:04:35.929 --rc genhtml_legend=1 00:04:35.929 --rc geninfo_all_blocks=1 00:04:35.929 --rc geninfo_unexecuted_blocks=1 00:04:35.929 00:04:35.929 ' 00:04:35.929 17:59:10 -- common/autotest_common.sh@1725 -- 
# LCOV='lcov 00:04:35.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.929 --rc genhtml_branch_coverage=1 00:04:35.929 --rc genhtml_function_coverage=1 00:04:35.929 --rc genhtml_legend=1 00:04:35.929 --rc geninfo_all_blocks=1 00:04:35.929 --rc geninfo_unexecuted_blocks=1 00:04:35.929 00:04:35.929 ' 00:04:35.929 17:59:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.929 17:59:10 -- nvmf/common.sh@7 -- # uname -s 00:04:35.929 17:59:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.929 17:59:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.929 17:59:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.929 17:59:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.929 17:59:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.929 17:59:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.929 17:59:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.929 17:59:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.929 17:59:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.929 17:59:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.929 17:59:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b5ed997-18b8-4232-b3a4-124f0355258f 00:04:35.929 17:59:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=0b5ed997-18b8-4232-b3a4-124f0355258f 00:04:35.930 17:59:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.930 17:59:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.930 17:59:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.930 17:59:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.930 17:59:10 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.930 17:59:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.930 17:59:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.930 17:59:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.930 17:59:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.930 17:59:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.930 17:59:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.930 17:59:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.930 17:59:10 -- paths/export.sh@5 -- # export PATH 00:04:35.930 17:59:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.930 17:59:10 -- 
nvmf/common.sh@51 -- # : 0 00:04:35.930 17:59:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.930 17:59:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.930 17:59:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.930 17:59:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.930 17:59:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.930 17:59:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.930 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.930 17:59:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.930 17:59:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.930 17:59:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.930 17:59:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:35.930 17:59:10 -- spdk/autotest.sh@32 -- # uname -s 00:04:35.930 17:59:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:35.930 17:59:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:35.930 17:59:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.930 17:59:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:35.930 17:59:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.930 17:59:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:36.189 17:59:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:36.189 17:59:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:36.189 17:59:10 -- spdk/autotest.sh@48 -- # udevadm_pid=68056 00:04:36.189 17:59:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:36.189 17:59:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:36.190 17:59:10 -- pm/common@17 -- # local monitor 00:04:36.190 17:59:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.190 17:59:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.190 17:59:10 -- pm/common@25 -- # sleep 1 00:04:36.190 17:59:10 -- pm/common@21 -- # date +%s 00:04:36.190 17:59:10 -- pm/common@21 -- # date +%s 00:04:36.190 17:59:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734112750 00:04:36.190 17:59:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734112750 00:04:36.190 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734112750_collect-cpu-load.pm.log 00:04:36.190 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734112750_collect-vmstat.pm.log 00:04:37.123 17:59:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:37.123 17:59:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:37.123 17:59:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.123 17:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:37.123 17:59:11 -- spdk/autotest.sh@59 -- # create_test_list 00:04:37.123 17:59:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:37.123 17:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:37.123 17:59:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:37.123 17:59:11 -- spdk/autotest.sh@61 -- # readlink -f 
/home/vagrant/spdk_repo/spdk 00:04:37.123 17:59:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:37.123 17:59:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:37.123 17:59:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:37.123 17:59:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:37.123 17:59:11 -- common/autotest_common.sh@1457 -- # uname 00:04:37.123 17:59:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:37.123 17:59:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:37.123 17:59:11 -- common/autotest_common.sh@1477 -- # uname 00:04:37.123 17:59:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:37.123 17:59:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:37.123 17:59:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:37.123 lcov: LCOV version 1.15 00:04:37.123 17:59:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:51.998 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:51.998 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:10.122 17:59:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:10.122 17:59:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.122 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:05:10.122 17:59:41 -- spdk/autotest.sh@78 -- # rm -f 00:05:10.122 17:59:41 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.122 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:10.122 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:10.122 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:10.122 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:10.122 17:59:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:10.122 17:59:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:10.122 17:59:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:10.122 17:59:43 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:10.122 17:59:43 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:10.122 17:59:43 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:10.122 17:59:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:10.122 17:59:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:10.122 17:59:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:10.122 17:59:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.122 
17:59:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:10.122 17:59:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:10.122 17:59:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:10.122 17:59:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:10.122 17:59:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:10.122 17:59:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:10.122 17:59:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:10.122 17:59:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:10.122 17:59:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:05:10.122 17:59:43 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:05:10.122 17:59:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:10.122 17:59:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:10.122 17:59:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:05:10.122 17:59:43 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:10.122 17:59:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:10.122 17:59:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.122 17:59:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:10.122 17:59:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.122 17:59:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.122 17:59:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:10.122 17:59:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:10.122 17:59:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:10.122 No valid GPT data, bailing 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # pt= 00:05:10.122 17:59:43 -- scripts/common.sh@395 -- # return 1 00:05:10.122 17:59:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:10.122 1+0 records in 00:05:10.122 1+0 records 
out 00:05:10.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034571 s, 30.3 MB/s 00:05:10.122 17:59:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.122 17:59:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.122 17:59:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:10.122 17:59:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:10.122 17:59:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:10.122 No valid GPT data, bailing 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # pt= 00:05:10.122 17:59:43 -- scripts/common.sh@395 -- # return 1 00:05:10.122 17:59:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:10.122 1+0 records in 00:05:10.122 1+0 records out 00:05:10.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610143 s, 172 MB/s 00:05:10.122 17:59:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.122 17:59:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.122 17:59:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:10.122 17:59:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:10.122 17:59:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:10.122 No valid GPT data, bailing 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # pt= 00:05:10.122 17:59:43 -- scripts/common.sh@395 -- # return 1 00:05:10.122 17:59:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:10.122 1+0 records in 00:05:10.122 1+0 records out 00:05:10.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00597016 s, 176 MB/s 00:05:10.122 17:59:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.122 17:59:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.122 17:59:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:10.122 17:59:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:10.122 17:59:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:10.122 No valid GPT data, bailing 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # pt= 00:05:10.122 17:59:43 -- scripts/common.sh@395 -- # return 1 00:05:10.122 17:59:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:10.122 1+0 records in 00:05:10.122 1+0 records out 00:05:10.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483468 s, 217 MB/s 00:05:10.122 17:59:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.122 17:59:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.122 17:59:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:10.122 17:59:43 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:10.122 17:59:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:10.122 No valid GPT data, bailing 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:10.122 17:59:43 -- scripts/common.sh@394 -- # pt= 00:05:10.122 17:59:43 -- scripts/common.sh@395 -- # return 1 00:05:10.122 17:59:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:10.122 1+0 records in 00:05:10.122 1+0 records 
out 00:05:10.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458749 s, 229 MB/s 00:05:10.123 17:59:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.123 17:59:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.123 17:59:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:10.123 17:59:43 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:10.123 17:59:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:10.123 No valid GPT data, bailing 00:05:10.123 17:59:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:10.123 17:59:43 -- scripts/common.sh@394 -- # pt= 00:05:10.123 17:59:43 -- scripts/common.sh@395 -- # return 1 00:05:10.123 17:59:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:10.123 1+0 records in 00:05:10.123 1+0 records out 00:05:10.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505342 s, 207 MB/s 00:05:10.123 17:59:43 -- spdk/autotest.sh@105 -- # sync 00:05:10.123 17:59:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:10.123 17:59:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:10.123 17:59:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:11.058 17:59:45 -- spdk/autotest.sh@111 -- # uname -s 00:05:11.058 17:59:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:11.058 17:59:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:11.058 17:59:45 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:11.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.888 Hugepages 00:05:11.888 node hugesize free / total 00:05:12.149 node0 1048576kB 0 / 0 00:05:12.149 node0 2048kB 0 / 0 00:05:12.149 00:05:12.149 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.149 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:12.149 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:12.149 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:12.410 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:12.410 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:12.410 17:59:46 -- spdk/autotest.sh@117 -- # uname -s 00:05:12.410 17:59:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:12.410 17:59:46 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:12.410 17:59:46 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.555 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.555 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.555 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.555 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.555 17:59:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:14.498 17:59:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:14.498 17:59:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:14.498 17:59:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:14.498 17:59:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:14.498 17:59:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:14.498 17:59:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:14.498 17:59:48 -- common/autotest_common.sh@1499 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.498 17:59:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:14.498 17:59:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:14.759 17:59:48 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:14.759 17:59:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:14.759 17:59:48 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.020 Waiting for block devices as requested 00:05:15.288 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.288 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.288 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.288 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.567 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:20.567 17:59:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.567 17:59:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:20.567 17:59:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:20.567 17:59:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:20.567 17:59:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:20.567 17:59:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.567 17:59:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.567 17:59:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.567 17:59:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1543 -- # continue 00:05:20.567 17:59:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.567 17:59:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:20.567 17:59:54 -- 
common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.567 17:59:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.567 17:59:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.567 17:59:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1543 -- # continue 00:05:20.567 17:59:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.567 17:59:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.567 17:59:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.567 17:59:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.567 17:59:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1543 -- # continue 00:05:20.567 17:59:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.567 17:59:54 -- 
common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.567 17:59:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:20.567 17:59:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:20.567 17:59:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:20.567 17:59:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:20.567 17:59:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.567 17:59:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.567 17:59:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.567 17:59:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.567 17:59:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.568 17:59:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.568 17:59:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:20.568 17:59:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.568 17:59:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.568 17:59:54 -- common/autotest_common.sh@1543 -- # continue 00:05:20.568 17:59:54 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:20.568 17:59:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.568 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:20.568 17:59:54 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:20.568 17:59:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.568 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:20.568 17:59:54 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.709 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.709 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.709 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.709 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.970 17:59:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:21.970 17:59:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.970 17:59:56 -- common/autotest_common.sh@10 -- # set +x 00:05:21.970 17:59:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:21.970 17:59:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:21.970 17:59:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:21.970 17:59:56 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:21.970 17:59:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:21.970 17:59:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:21.970 17:59:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:21.970 17:59:56 -- common/autotest_common.sh@1564 -- # 
get_nvme_bdfs 00:05:21.970 17:59:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:21.970 17:59:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:21.970 17:59:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.970 17:59:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:21.970 17:59:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:21.970 17:59:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:21.970 17:59:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:21.970 17:59:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.970 17:59:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:21.970 17:59:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.970 17:59:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.970 17:59:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.970 17:59:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:21.970 17:59:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.970 17:59:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.970 17:59:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.970 17:59:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:21.970 17:59:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.970 17:59:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.970 17:59:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.970 17:59:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:21.970 17:59:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.970 17:59:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.970 17:59:56 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:21.970 17:59:56 -- common/autotest_common.sh@1572 -- # return 0 00:05:21.970 17:59:56 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:21.970 17:59:56 -- common/autotest_common.sh@1580 -- # return 0 00:05:21.970 17:59:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:21.970 17:59:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:21.970 17:59:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:21.970 17:59:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:21.970 17:59:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:21.970 17:59:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.970 17:59:56 -- common/autotest_common.sh@10 -- # set +x 00:05:21.970 17:59:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:21.970 17:59:56 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:21.970 17:59:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.970 17:59:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.970 17:59:56 -- common/autotest_common.sh@10 -- # set +x 00:05:21.970 ************************************ 00:05:21.970 START TEST env 00:05:21.970 ************************************ 00:05:21.970 17:59:56 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:22.231 * Looking for test 
storage... 00:05:22.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:22.231 17:59:56 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.231 17:59:56 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.231 17:59:56 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.231 17:59:56 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.231 17:59:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.231 17:59:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.231 17:59:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.231 17:59:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.231 17:59:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.231 17:59:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.231 17:59:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.231 17:59:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.231 17:59:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.231 17:59:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.231 17:59:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.231 17:59:56 env -- scripts/common.sh@344 -- # case "$op" in 00:05:22.231 17:59:56 env -- scripts/common.sh@345 -- # : 1 00:05:22.231 17:59:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.231 17:59:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.231 17:59:56 env -- scripts/common.sh@365 -- # decimal 1 00:05:22.231 17:59:56 env -- scripts/common.sh@353 -- # local d=1 00:05:22.231 17:59:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.231 17:59:56 env -- scripts/common.sh@355 -- # echo 1 00:05:22.231 17:59:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.231 17:59:56 env -- scripts/common.sh@366 -- # decimal 2 00:05:22.231 17:59:56 env -- scripts/common.sh@353 -- # local d=2 00:05:22.231 17:59:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.231 17:59:56 env -- scripts/common.sh@355 -- # echo 2 00:05:22.231 17:59:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.231 17:59:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.232 17:59:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.232 17:59:56 env -- scripts/common.sh@368 -- # return 0 00:05:22.232 17:59:56 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.232 17:59:56 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.232 --rc genhtml_branch_coverage=1 00:05:22.232 --rc genhtml_function_coverage=1 00:05:22.232 --rc genhtml_legend=1 00:05:22.232 --rc geninfo_all_blocks=1 00:05:22.232 --rc geninfo_unexecuted_blocks=1 00:05:22.232 00:05:22.232 ' 00:05:22.232 17:59:56 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.232 --rc genhtml_branch_coverage=1 00:05:22.232 --rc genhtml_function_coverage=1 00:05:22.232 --rc genhtml_legend=1 00:05:22.232 --rc geninfo_all_blocks=1 00:05:22.232 --rc geninfo_unexecuted_blocks=1 00:05:22.232 00:05:22.232 ' 00:05:22.232 17:59:56 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.232 --rc genhtml_branch_coverage=1 00:05:22.232 --rc genhtml_function_coverage=1 00:05:22.232 --rc 
genhtml_legend=1 00:05:22.232 --rc geninfo_all_blocks=1 00:05:22.232 --rc geninfo_unexecuted_blocks=1 00:05:22.232 00:05:22.232 ' 00:05:22.232 17:59:56 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.232 --rc genhtml_branch_coverage=1 00:05:22.232 --rc genhtml_function_coverage=1 00:05:22.232 --rc genhtml_legend=1 00:05:22.232 --rc geninfo_all_blocks=1 00:05:22.232 --rc geninfo_unexecuted_blocks=1 00:05:22.232 00:05:22.232 ' 00:05:22.232 17:59:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:22.232 17:59:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.232 17:59:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.232 17:59:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.232 ************************************ 00:05:22.232 START TEST env_memory 00:05:22.232 ************************************ 00:05:22.232 17:59:56 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:22.232 00:05:22.232 00:05:22.232 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.232 http://cunit.sourceforge.net/ 00:05:22.232 00:05:22.232 00:05:22.232 Suite: memory 00:05:22.232 Test: alloc and free memory map ...[2024-12-13 17:59:56.503036] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:22.232 passed 00:05:22.232 Test: mem map translation ...[2024-12-13 17:59:56.551990] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:22.232 [2024-12-13 17:59:56.552272] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:22.232 [2024-12-13 17:59:56.552350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:22.232 [2024-12-13 17:59:56.552625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:22.506 passed 00:05:22.506 Test: mem map registration ...[2024-12-13 17:59:56.631036] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:22.506 [2024-12-13 17:59:56.631318] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:22.506 passed 00:05:22.506 Test: mem map adjacent registrations ...passed 00:05:22.506 00:05:22.506 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.506 suites 1 1 n/a 0 0 00:05:22.506 tests 4 4 4 0 0 00:05:22.506 asserts 152 152 152 0 n/a 00:05:22.506 00:05:22.506 Elapsed time = 0.251 seconds 00:05:22.506 00:05:22.506 real 0m0.292s 00:05:22.506 user 0m0.248s 00:05:22.506 sys 0m0.026s 00:05:22.506 ************************************ 00:05:22.506 END TEST env_memory 00:05:22.506 ************************************ 00:05:22.506 17:59:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.506 17:59:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:22.506 17:59:56 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:22.506 17:59:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.506 17:59:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.506 17:59:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.506 ************************************ 00:05:22.506 START TEST env_vtophys 00:05:22.506 ************************************ 00:05:22.506 17:59:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:22.506 EAL: lib.eal log level changed from notice to debug 00:05:22.506 EAL: Detected lcore 0 as core 0 on socket 0 00:05:22.506 EAL: Detected lcore 1 as core 0 on socket 0 00:05:22.506 EAL: Detected lcore 2 as core 0 on socket 0 00:05:22.506 EAL: Detected lcore 3 as core 0 on socket 0 00:05:22.506 EAL: Detected lcore 4 as core 0 on socket 0 00:05:22.506 EAL: Detected lcore 5 as core 0 on socket 0 00:05:22.506 EAL: Detected lcore 6 as core 0 on socket 0 00:05:22.506 EAL: Detected lcore 7 as core 0 on socket 0 00:05:22.506 EAL: Detected lcore 8 as core 0 on socket 0 00:05:22.506 EAL: Detected lcore 9 as core 0 on socket 0 00:05:22.506 EAL: Maximum logical cores by configuration: 128 00:05:22.506 EAL: Detected CPU lcores: 10 00:05:22.506 EAL: Detected NUMA nodes: 1 00:05:22.506 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:22.506 EAL: Detected shared linkage of DPDK 00:05:22.506 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:22.506 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:22.506 EAL: Registered [vdev] bus. 00:05:22.506 EAL: bus.vdev log level changed from disabled to notice 00:05:22.506 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:22.506 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:22.506 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:22.506 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:22.506 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:22.506 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:22.506 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:22.506 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:22.506 EAL: No shared files mode enabled, IPC will be disabled 00:05:22.506 EAL: No shared files mode enabled, IPC is disabled 00:05:22.506 EAL: Selected IOVA mode 'PA' 00:05:22.506 EAL: Probing VFIO support... 00:05:22.506 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:22.506 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:22.506 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.506 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.507 EAL: Setting up physically contiguous memory... 
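The EAL bring-up traced above (lcore detection, shared-linkage driver loading, IOVA mode selection, memseg list reservation) is what every SPDK process performs before a test body can run. A minimal sketch of that initialization, assuming only the public spdk/env.h API; the process name and core mask below are illustrative, not the test's actual options:

    #include "spdk/env.h"
    #include <stdio.h>

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_sketch";   /* illustrative process name */
        opts.core_mask = "0x1";     /* one lcore, as on this test bed */

        /* This call drives the EAL lines seen in the log above. */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }
        spdk_env_fini();
        return 0;
    }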
00:05:22.507 EAL: Setting maximum number of open files to 524288 00:05:22.507 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.507 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.507 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.507 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.507 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.507 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.507 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.507 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.507 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.507 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.507 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.507 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.507 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.507 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.507 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.507 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.507 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.507 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.507 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.507 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.507 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.507 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.507 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.507 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.507 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.507 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:22.507 EAL: Hugepages will be freed exactly as allocated. 00:05:22.507 EAL: No shared files mode enabled, IPC is disabled 00:05:22.507 EAL: No shared files mode enabled, IPC is disabled 00:05:22.776 EAL: TSC frequency is ~2600000 KHz 00:05:22.776 EAL: Main lcore 0 is ready (tid=7fc488e70a40;cpuset=[0]) 00:05:22.776 EAL: Trying to obtain current memory policy. 00:05:22.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.776 EAL: Restoring previous memory policy: 0 00:05:22.776 EAL: request: mp_malloc_sync 00:05:22.776 EAL: No shared files mode enabled, IPC is disabled 00:05:22.776 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.776 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:22.776 EAL: No shared files mode enabled, IPC is disabled 00:05:22.776 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.776 EAL: Mem event callback 'spdk:(nil)' registered 00:05:22.776 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:22.776 00:05:22.776 00:05:22.776 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.776 http://cunit.sourceforge.net/ 00:05:22.776 00:05:22.776 00:05:22.776 Suite: components_suite 00:05:23.038 Test: vtophys_malloc_test ...passed 00:05:23.038 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
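vtophys_malloc_test, which passes above, boils down to allocating from the hugepage-backed heap and asking the env layer for a physical translation. A hedged sketch of that call pattern (the buffer size and function name are illustrative; SPDK_VTOPHYS_ERROR is the API's failure value):

    #include "spdk/env.h"
    #include <inttypes.h>
    #include <stdio.h>

    /* Assumes the environment was initialized as in the previous sketch. */
    static void
    translate_one_buffer(void)
    {
        /* 1 MiB from the hugepage-backed heap; alignment 0 = default. */
        void *buf = spdk_dma_malloc(1024 * 1024, 0, NULL);
        uint64_t paddr;

        if (buf == NULL) {
            return;
        }

        paddr = spdk_vtophys(buf, NULL);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "no translation for %p\n", buf);
        } else {
            printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
        }
        spdk_dma_free(buf);
    }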
00:05:23.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.038 EAL: Restoring previous memory policy: 4 00:05:23.038 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.038 EAL: request: mp_malloc_sync 00:05:23.038 EAL: No shared files mode enabled, IPC is disabled 00:05:23.038 EAL: Heap on socket 0 was expanded by 4MB 00:05:23.038 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.038 EAL: request: mp_malloc_sync 00:05:23.038 EAL: No shared files mode enabled, IPC is disabled 00:05:23.038 EAL: Heap on socket 0 was shrunk by 4MB 00:05:23.038 EAL: Trying to obtain current memory policy. 00:05:23.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.038 EAL: Restoring previous memory policy: 4 00:05:23.038 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.038 EAL: request: mp_malloc_sync 00:05:23.038 EAL: No shared files mode enabled, IPC is disabled 00:05:23.038 EAL: Heap on socket 0 was expanded by 6MB 00:05:23.038 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.038 EAL: request: mp_malloc_sync 00:05:23.039 EAL: No shared files mode enabled, IPC is disabled 00:05:23.039 EAL: Heap on socket 0 was shrunk by 6MB 00:05:23.039 EAL: Trying to obtain current memory policy. 00:05:23.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.039 EAL: Restoring previous memory policy: 4 00:05:23.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.039 EAL: request: mp_malloc_sync 00:05:23.039 EAL: No shared files mode enabled, IPC is disabled 00:05:23.039 EAL: Heap on socket 0 was expanded by 10MB 00:05:23.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.039 EAL: request: mp_malloc_sync 00:05:23.039 EAL: No shared files mode enabled, IPC is disabled 00:05:23.039 EAL: Heap on socket 0 was shrunk by 10MB 00:05:23.039 EAL: Trying to obtain current memory policy. 00:05:23.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.039 EAL: Restoring previous memory policy: 4 00:05:23.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.039 EAL: request: mp_malloc_sync 00:05:23.039 EAL: No shared files mode enabled, IPC is disabled 00:05:23.039 EAL: Heap on socket 0 was expanded by 18MB 00:05:23.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.039 EAL: request: mp_malloc_sync 00:05:23.039 EAL: No shared files mode enabled, IPC is disabled 00:05:23.039 EAL: Heap on socket 0 was shrunk by 18MB 00:05:23.039 EAL: Trying to obtain current memory policy. 00:05:23.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.039 EAL: Restoring previous memory policy: 4 00:05:23.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.039 EAL: request: mp_malloc_sync 00:05:23.039 EAL: No shared files mode enabled, IPC is disabled 00:05:23.039 EAL: Heap on socket 0 was expanded by 34MB 00:05:23.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.039 EAL: request: mp_malloc_sync 00:05:23.039 EAL: No shared files mode enabled, IPC is disabled 00:05:23.039 EAL: Heap on socket 0 was shrunk by 34MB 00:05:23.039 EAL: Trying to obtain current memory policy. 
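Each "Calling mem event callback 'spdk:(nil)'" line above is SPDK pushing a heap registration or unregistration to every registered memory map. A sketch of hooking those events, assuming the spdk_mem_map_alloc/spdk_mem_map_ops API from spdk/env.h; heap_notify and watch_heap are made-up names:

    #include "spdk/env.h"
    #include <stdio.h>

    static int
    heap_notify(void *cb_ctx, struct spdk_mem_map *map,
                enum spdk_mem_map_notify_action action,
                void *vaddr, size_t size)
    {
        printf("%s vaddr=%p len=%zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
               vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops heap_notify_ops = {
        .notify_cb = heap_notify,
        .are_contiguous = NULL,
    };

    void
    watch_heap(void)
    {
        /* 0 = default translation; cb_ctx unused in this sketch. */
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &heap_notify_ops, NULL);

        /* ... every later heap expansion/shrink now invokes heap_notify ... */
        spdk_mem_map_free(&map);
    }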
00:05:23.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.039 EAL: Restoring previous memory policy: 4 00:05:23.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.039 EAL: request: mp_malloc_sync 00:05:23.039 EAL: No shared files mode enabled, IPC is disabled 00:05:23.039 EAL: Heap on socket 0 was expanded by 66MB 00:05:23.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.301 EAL: request: mp_malloc_sync 00:05:23.301 EAL: No shared files mode enabled, IPC is disabled 00:05:23.301 EAL: Heap on socket 0 was shrunk by 66MB 00:05:23.301 EAL: Trying to obtain current memory policy. 00:05:23.301 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.301 EAL: Restoring previous memory policy: 4 00:05:23.301 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.301 EAL: request: mp_malloc_sync 00:05:23.301 EAL: No shared files mode enabled, IPC is disabled 00:05:23.301 EAL: Heap on socket 0 was expanded by 130MB 00:05:23.301 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.301 EAL: request: mp_malloc_sync 00:05:23.301 EAL: No shared files mode enabled, IPC is disabled 00:05:23.301 EAL: Heap on socket 0 was shrunk by 130MB 00:05:23.301 EAL: Trying to obtain current memory policy. 00:05:23.301 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.301 EAL: Restoring previous memory policy: 4 00:05:23.301 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.301 EAL: request: mp_malloc_sync 00:05:23.301 EAL: No shared files mode enabled, IPC is disabled 00:05:23.301 EAL: Heap on socket 0 was expanded by 258MB 00:05:23.301 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.301 EAL: request: mp_malloc_sync 00:05:23.301 EAL: No shared files mode enabled, IPC is disabled 00:05:23.301 EAL: Heap on socket 0 was shrunk by 258MB 00:05:23.301 EAL: Trying to obtain current memory policy. 00:05:23.301 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.562 EAL: Restoring previous memory policy: 4 00:05:23.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.562 EAL: request: mp_malloc_sync 00:05:23.562 EAL: No shared files mode enabled, IPC is disabled 00:05:23.562 EAL: Heap on socket 0 was expanded by 514MB 00:05:23.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.562 EAL: request: mp_malloc_sync 00:05:23.562 EAL: No shared files mode enabled, IPC is disabled 00:05:23.562 EAL: Heap on socket 0 was shrunk by 514MB 00:05:23.562 EAL: Trying to obtain current memory policy. 
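The expand/shrink pairs continue because the suite allocates and frees successively larger buffers, forcing the heap to grow and hand memory back each time. An illustrative loop with the same effect (the test's exact size ladder is not reproduced here):

    #include "spdk/env.h"
    #include <stdio.h>

    void
    walk_allocation_ladder(void)
    {
        for (size_t sz = 4ULL << 20; sz <= 1024ULL << 20; sz *= 2) {
            void *buf = spdk_dma_malloc(sz, 0, NULL);  /* "Heap ... expanded" */

            if (buf == NULL) {
                fprintf(stderr, "alloc of %zu bytes failed\n", sz);
                return;
            }
            spdk_dma_free(buf);                        /* "Heap ... shrunk" */
        }
    }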
00:05:23.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.823 EAL: Restoring previous memory policy: 4 00:05:23.823 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.823 EAL: request: mp_malloc_sync 00:05:23.823 EAL: No shared files mode enabled, IPC is disabled 00:05:23.823 EAL: Heap on socket 0 was expanded by 1026MB 00:05:24.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.084 passed 00:05:24.084 00:05:24.084 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.084 suites 1 1 n/a 0 0 00:05:24.084 tests 2 2 2 0 0 00:05:24.084 asserts 5337 5337 5337 0 n/a 00:05:24.084 00:05:24.084 Elapsed time = 1.404 seconds 00:05:24.084 EAL: request: mp_malloc_sync 00:05:24.084 EAL: No shared files mode enabled, IPC is disabled 00:05:24.084 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:24.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.084 EAL: request: mp_malloc_sync 00:05:24.084 EAL: No shared files mode enabled, IPC is disabled 00:05:24.084 EAL: Heap on socket 0 was shrunk by 2MB 00:05:24.084 EAL: No shared files mode enabled, IPC is disabled 00:05:24.084 EAL: No shared files mode enabled, IPC is disabled 00:05:24.084 EAL: No shared files mode enabled, IPC is disabled 00:05:24.084 00:05:24.084 real 0m1.635s 00:05:24.084 user 0m0.646s 00:05:24.084 sys 0m0.847s 00:05:24.084 17:59:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.084 ************************************ 00:05:24.084 END TEST env_vtophys 00:05:24.084 ************************************ 00:05:24.084 17:59:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:24.345 17:59:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:24.345 17:59:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.345 17:59:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.345 17:59:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.345 ************************************ 00:05:24.345 START TEST env_pci 00:05:24.345 ************************************ 00:05:24.345 17:59:58 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:24.345 00:05:24.345 00:05:24.345 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.345 http://cunit.sourceforge.net/ 00:05:24.345 00:05:24.345 00:05:24.345 Suite: pci 00:05:24.345 Test: pci_hook ...[2024-12-13 17:59:58.528044] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70809 has claimed it 00:05:24.345 passed 00:05:24.345 00:05:24.345 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.345 suites 1 1 n/a 0 0 00:05:24.345 tests 1 1 1 0 0 00:05:24.345 asserts 25 25 25 0 n/a 00:05:24.345 00:05:24.345 Elapsed time = 0.005 seconds 00:05:24.345 EAL: Cannot find device (10000:00:01.0) 00:05:24.345 EAL: Failed to attach device on primary process 00:05:24.345 00:05:24.345 real 0m0.048s 00:05:24.345 user 0m0.018s 00:05:24.345 sys 0m0.030s 00:05:24.345 17:59:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.345 ************************************ 00:05:24.345 END TEST env_pci 00:05:24.345 ************************************ 00:05:24.345 17:59:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:24.345 17:59:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:24.345 17:59:58 env -- env/env.sh@15 -- # uname 00:05:24.345 17:59:58 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:24.345 17:59:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:24.345 17:59:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.345 17:59:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:24.345 17:59:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.345 17:59:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.345 ************************************ 00:05:24.345 START TEST env_dpdk_post_init 00:05:24.345 ************************************ 00:05:24.345 17:59:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.345 EAL: Detected CPU lcores: 10 00:05:24.345 EAL: Detected NUMA nodes: 1 00:05:24.345 EAL: Detected shared linkage of DPDK 00:05:24.345 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.345 EAL: Selected IOVA mode 'PA' 00:05:24.607 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.607 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:24.607 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:24.607 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:24.607 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:24.607 Starting DPDK initialization... 00:05:24.607 Starting SPDK post initialization... 00:05:24.607 SPDK NVMe probe 00:05:24.607 Attaching to 0000:00:10.0 00:05:24.607 Attaching to 0000:00:11.0 00:05:24.607 Attaching to 0000:00:12.0 00:05:24.607 Attaching to 0000:00:13.0 00:05:24.607 Attached to 0000:00:13.0 00:05:24.607 Attached to 0000:00:10.0 00:05:24.607 Attached to 0000:00:11.0 00:05:24.607 Attached to 0000:00:12.0 00:05:24.607 Cleaning up... 
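Each Attaching/Attached pair above comes from the standard probe/attach enumeration in the NVMe driver. A sketch of that flow, assuming the public spdk/nvme.h API; the callback names are illustrative:

    #include "spdk/nvme.h"
    #include <stdio.h>

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;    /* true = go ahead and attach this controller */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    enumerate_local_nvme(void)
    {
        /* NULL trid: enumerate the local PCIe bus, which is what produces
         * one probe/attach pair per 1b36:0010 device in the log above. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }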
00:05:24.607 00:05:24.607 real 0m0.228s 00:05:24.607 user 0m0.066s 00:05:24.607 sys 0m0.062s 00:05:24.607 ************************************ 00:05:24.607 17:59:58 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.607 17:59:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.607 END TEST env_dpdk_post_init 00:05:24.607 ************************************ 00:05:24.607 17:59:58 env -- env/env.sh@26 -- # uname 00:05:24.607 17:59:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:24.607 17:59:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.607 17:59:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.607 17:59:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.607 17:59:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.607 ************************************ 00:05:24.607 START TEST env_mem_callbacks 00:05:24.607 ************************************ 00:05:24.607 17:59:58 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.607 EAL: Detected CPU lcores: 10 00:05:24.607 EAL: Detected NUMA nodes: 1 00:05:24.607 EAL: Detected shared linkage of DPDK 00:05:24.607 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.607 EAL: Selected IOVA mode 'PA' 00:05:24.869 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.869 00:05:24.869 00:05:24.869 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.869 http://cunit.sourceforge.net/ 00:05:24.869 00:05:24.869 00:05:24.869 Suite: memory 00:05:24.869 Test: test ... 00:05:24.869 register 0x200000200000 2097152 00:05:24.869 malloc 3145728 00:05:24.869 register 0x200000400000 4194304 00:05:24.869 buf 0x200000500000 len 3145728 PASSED 00:05:24.869 malloc 64 00:05:24.869 buf 0x2000004fff40 len 64 PASSED 00:05:24.869 malloc 4194304 00:05:24.869 register 0x200000800000 6291456 00:05:24.869 buf 0x200000a00000 len 4194304 PASSED 00:05:24.869 free 0x200000500000 3145728 00:05:24.869 free 0x2000004fff40 64 00:05:24.869 unregister 0x200000400000 4194304 PASSED 00:05:24.869 free 0x200000a00000 4194304 00:05:24.869 unregister 0x200000800000 6291456 PASSED 00:05:24.869 malloc 8388608 00:05:24.869 register 0x200000400000 10485760 00:05:24.869 buf 0x200000600000 len 8388608 PASSED 00:05:24.869 free 0x200000600000 8388608 00:05:24.869 unregister 0x200000400000 10485760 PASSED 00:05:24.869 passed 00:05:24.869 00:05:24.869 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.869 suites 1 1 n/a 0 0 00:05:24.869 tests 1 1 1 0 0 00:05:24.869 asserts 15 15 15 0 n/a 00:05:24.869 00:05:24.869 Elapsed time = 0.011 seconds 00:05:24.869 00:05:24.869 real 0m0.157s 00:05:24.869 user 0m0.021s 00:05:24.869 sys 0m0.034s 00:05:24.869 17:59:59 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.869 ************************************ 00:05:24.869 END TEST env_mem_callbacks 00:05:24.869 ************************************ 00:05:24.869 17:59:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:24.869 00:05:24.869 real 0m2.851s 00:05:24.869 user 0m1.167s 00:05:24.869 sys 0m1.207s 00:05:24.869 ************************************ 00:05:24.869 END TEST env 00:05:24.869 17:59:59 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.869 17:59:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.869 
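The register/unregister lines in the mem_callbacks test exercise the API for making externally allocated memory visible to SPDK's translation maps. A minimal sketch, assuming a 2 MiB-aligned region as the test uses; dma_window is a made-up name:

    #include "spdk/env.h"

    /* vaddr/len are assumed 2 MiB-aligned, matching the addresses logged. */
    int
    dma_window(void *vaddr, size_t len)
    {
        int rc = spdk_mem_register(vaddr, len);    /* logged as "register ..." */

        if (rc != 0) {
            return rc;
        }
        /* ... the buffer is now visible to vtophys/DMA translation ... */
        return spdk_mem_unregister(vaddr, len);    /* logged as "unregister ..." */
    }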
************************************ 00:05:24.869 17:59:59 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:24.869 17:59:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.869 17:59:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.869 17:59:59 -- common/autotest_common.sh@10 -- # set +x 00:05:24.869 ************************************ 00:05:24.869 START TEST rpc 00:05:24.869 ************************************ 00:05:24.869 17:59:59 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:25.130 * Looking for test storage... 00:05:25.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.130 17:59:59 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.130 17:59:59 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.130 17:59:59 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.130 17:59:59 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.130 17:59:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.130 17:59:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.130 17:59:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.130 17:59:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.130 17:59:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.130 17:59:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.130 17:59:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.130 17:59:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.130 17:59:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.130 17:59:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.130 17:59:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.130 17:59:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:25.130 17:59:59 rpc -- scripts/common.sh@345 -- # : 1 00:05:25.130 17:59:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.130 17:59:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.130 17:59:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:25.130 17:59:59 rpc -- scripts/common.sh@353 -- # local d=1 00:05:25.130 17:59:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.130 17:59:59 rpc -- scripts/common.sh@355 -- # echo 1 00:05:25.130 17:59:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.130 17:59:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:25.130 17:59:59 rpc -- scripts/common.sh@353 -- # local d=2 00:05:25.130 17:59:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.130 17:59:59 rpc -- scripts/common.sh@355 -- # echo 2 00:05:25.130 17:59:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.130 17:59:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.130 17:59:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.130 17:59:59 rpc -- scripts/common.sh@368 -- # return 0 00:05:25.130 17:59:59 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.130 17:59:59 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.130 --rc genhtml_branch_coverage=1 00:05:25.130 --rc genhtml_function_coverage=1 00:05:25.130 --rc genhtml_legend=1 00:05:25.130 --rc geninfo_all_blocks=1 00:05:25.130 --rc geninfo_unexecuted_blocks=1 00:05:25.130 00:05:25.130 ' 00:05:25.130 17:59:59 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.130 --rc genhtml_branch_coverage=1 00:05:25.130 --rc genhtml_function_coverage=1 00:05:25.130 --rc genhtml_legend=1 00:05:25.131 --rc geninfo_all_blocks=1 00:05:25.131 --rc geninfo_unexecuted_blocks=1 00:05:25.131 00:05:25.131 ' 00:05:25.131 17:59:59 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.131 --rc genhtml_branch_coverage=1 00:05:25.131 --rc genhtml_function_coverage=1 00:05:25.131 --rc genhtml_legend=1 00:05:25.131 --rc geninfo_all_blocks=1 00:05:25.131 --rc geninfo_unexecuted_blocks=1 00:05:25.131 00:05:25.131 ' 00:05:25.131 17:59:59 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.131 --rc genhtml_branch_coverage=1 00:05:25.131 --rc genhtml_function_coverage=1 00:05:25.131 --rc genhtml_legend=1 00:05:25.131 --rc geninfo_all_blocks=1 00:05:25.131 --rc geninfo_unexecuted_blocks=1 00:05:25.131 00:05:25.131 ' 00:05:25.131 17:59:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70930 00:05:25.131 17:59:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.131 17:59:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70930 00:05:25.131 17:59:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 70930 ']' 00:05:25.131 17:59:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.131 17:59:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.131 17:59:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
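spdk_tgt is now listening on /var/tmp/spdk.sock, and every rpc_cmd that follows is a JSON-RPC method dispatched by that target. A sketch of how such a method is registered on the C side, assuming the public spdk/rpc.h API; rpc_hello is a made-up method, not one this test calls:

    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"
    #include "spdk/json.h"

    static void
    rpc_hello(struct spdk_jsonrpc_request *request,
              const struct spdk_json_val *params)
    {
        struct spdk_json_write_ctx *w;

        if (params != NULL) {
            spdk_jsonrpc_send_error_response(request,
                                             SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                             "rpc_hello takes no parameters");
            return;
        }

        w = spdk_jsonrpc_begin_result(request);
        spdk_json_write_string(w, "hello");
        spdk_jsonrpc_end_result(request, w);
    }
    /* Callable once the target is running, like the methods traced below. */
    SPDK_RPC_REGISTER("rpc_hello", rpc_hello, SPDK_RPC_RUNTIME)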
00:05:25.131 17:59:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.131 17:59:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:25.131 17:59:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.131 [2024-12-13 17:59:59.433667] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:25.131 [2024-12-13 17:59:59.433827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70930 ] 00:05:25.392 [2024-12-13 17:59:59.577659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.392 [2024-12-13 17:59:59.607898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:25.392 [2024-12-13 17:59:59.607978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70930' to capture a snapshot of events at runtime. 00:05:25.392 [2024-12-13 17:59:59.607999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:25.392 [2024-12-13 17:59:59.608011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:25.392 [2024-12-13 17:59:59.608025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70930 for offline analysis/debug. 00:05:25.392 [2024-12-13 17:59:59.608444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.965 18:00:00 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.965 18:00:00 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.965 18:00:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.965 18:00:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.965 18:00:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:25.965 18:00:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:25.965 18:00:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.965 18:00:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.965 18:00:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.965 ************************************ 00:05:25.965 START TEST rpc_integrity 00:05:25.965 ************************************ 00:05:25.965 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:25.965 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.965 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.965 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.965 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.965 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.966 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
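bdev_malloc_create 8 512 asks for an 8 MiB malloc bdev with 512-byte blocks, so the JSON dump below reports 16384 blocks. A hedged sketch of verifying that geometry from C, assuming the public spdk/bdev.h accessors; check_malloc0_geometry is illustrative and would have to run on a thread inside a started SPDK application:

    #include "spdk/bdev.h"
    #include <inttypes.h>
    #include <stdio.h>

    void
    check_malloc0_geometry(void)
    {
        struct spdk_bdev *bdev = spdk_bdev_get_by_name("Malloc0");

        if (bdev == NULL) {
            fprintf(stderr, "Malloc0 not found\n");
            return;
        }
        /* 8 MiB / 512 B = 16384 blocks, matching the JSON dump below. */
        printf("block_size=%u num_blocks=%" PRIu64 "\n",
               spdk_bdev_get_block_size(bdev),
               spdk_bdev_get_num_blocks(bdev));
    }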
00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.227 { 00:05:26.227 "name": "Malloc0", 00:05:26.227 "aliases": [ 00:05:26.227 "47599ed7-a8e1-4170-a84d-85afe0c0548e" 00:05:26.227 ], 00:05:26.227 "product_name": "Malloc disk", 00:05:26.227 "block_size": 512, 00:05:26.227 "num_blocks": 16384, 00:05:26.227 "uuid": "47599ed7-a8e1-4170-a84d-85afe0c0548e", 00:05:26.227 "assigned_rate_limits": { 00:05:26.227 "rw_ios_per_sec": 0, 00:05:26.227 "rw_mbytes_per_sec": 0, 00:05:26.227 "r_mbytes_per_sec": 0, 00:05:26.227 "w_mbytes_per_sec": 0 00:05:26.227 }, 00:05:26.227 "claimed": false, 00:05:26.227 "zoned": false, 00:05:26.227 "supported_io_types": { 00:05:26.227 "read": true, 00:05:26.227 "write": true, 00:05:26.227 "unmap": true, 00:05:26.227 "flush": true, 00:05:26.227 "reset": true, 00:05:26.227 "nvme_admin": false, 00:05:26.227 "nvme_io": false, 00:05:26.227 "nvme_io_md": false, 00:05:26.227 "write_zeroes": true, 00:05:26.227 "zcopy": true, 00:05:26.227 "get_zone_info": false, 00:05:26.227 "zone_management": false, 00:05:26.227 "zone_append": false, 00:05:26.227 "compare": false, 00:05:26.227 "compare_and_write": false, 00:05:26.227 "abort": true, 00:05:26.227 "seek_hole": false, 00:05:26.227 "seek_data": false, 00:05:26.227 "copy": true, 00:05:26.227 "nvme_iov_md": false 00:05:26.227 }, 00:05:26.227 "memory_domains": [ 00:05:26.227 { 00:05:26.227 "dma_device_id": "system", 00:05:26.227 "dma_device_type": 1 00:05:26.227 }, 00:05:26.227 { 00:05:26.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.227 "dma_device_type": 2 00:05:26.227 } 00:05:26.227 ], 00:05:26.227 "driver_specific": {} 00:05:26.227 } 00:05:26.227 ]' 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.227 [2024-12-13 18:00:00.422521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.227 [2024-12-13 18:00:00.422615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.227 [2024-12-13 18:00:00.422649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:26.227 [2024-12-13 18:00:00.422661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.227 [2024-12-13 18:00:00.425324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.227 [2024-12-13 18:00:00.425383] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.227 
Passthru0 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.227 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.227 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.227 { 00:05:26.227 "name": "Malloc0", 00:05:26.227 "aliases": [ 00:05:26.227 "47599ed7-a8e1-4170-a84d-85afe0c0548e" 00:05:26.227 ], 00:05:26.227 "product_name": "Malloc disk", 00:05:26.227 "block_size": 512, 00:05:26.227 "num_blocks": 16384, 00:05:26.227 "uuid": "47599ed7-a8e1-4170-a84d-85afe0c0548e", 00:05:26.227 "assigned_rate_limits": { 00:05:26.227 "rw_ios_per_sec": 0, 00:05:26.227 "rw_mbytes_per_sec": 0, 00:05:26.227 "r_mbytes_per_sec": 0, 00:05:26.227 "w_mbytes_per_sec": 0 00:05:26.227 }, 00:05:26.227 "claimed": true, 00:05:26.227 "claim_type": "exclusive_write", 00:05:26.227 "zoned": false, 00:05:26.227 "supported_io_types": { 00:05:26.227 "read": true, 00:05:26.227 "write": true, 00:05:26.227 "unmap": true, 00:05:26.227 "flush": true, 00:05:26.227 "reset": true, 00:05:26.227 "nvme_admin": false, 00:05:26.227 "nvme_io": false, 00:05:26.227 "nvme_io_md": false, 00:05:26.227 "write_zeroes": true, 00:05:26.227 "zcopy": true, 00:05:26.227 "get_zone_info": false, 00:05:26.227 "zone_management": false, 00:05:26.227 "zone_append": false, 00:05:26.227 "compare": false, 00:05:26.227 "compare_and_write": false, 00:05:26.227 "abort": true, 00:05:26.227 "seek_hole": false, 00:05:26.227 "seek_data": false, 00:05:26.227 "copy": true, 00:05:26.227 "nvme_iov_md": false 00:05:26.227 }, 00:05:26.227 "memory_domains": [ 00:05:26.227 { 00:05:26.227 "dma_device_id": "system", 00:05:26.227 "dma_device_type": 1 00:05:26.227 }, 00:05:26.227 { 00:05:26.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.227 "dma_device_type": 2 00:05:26.227 } 00:05:26.227 ], 00:05:26.227 "driver_specific": {} 00:05:26.227 }, 00:05:26.227 { 00:05:26.227 "name": "Passthru0", 00:05:26.227 "aliases": [ 00:05:26.227 "ca4bc4f2-ba8b-5c29-bcb4-432416f7d2fb" 00:05:26.227 ], 00:05:26.227 "product_name": "passthru", 00:05:26.227 "block_size": 512, 00:05:26.227 "num_blocks": 16384, 00:05:26.227 "uuid": "ca4bc4f2-ba8b-5c29-bcb4-432416f7d2fb", 00:05:26.228 "assigned_rate_limits": { 00:05:26.228 "rw_ios_per_sec": 0, 00:05:26.228 "rw_mbytes_per_sec": 0, 00:05:26.228 "r_mbytes_per_sec": 0, 00:05:26.228 "w_mbytes_per_sec": 0 00:05:26.228 }, 00:05:26.228 "claimed": false, 00:05:26.228 "zoned": false, 00:05:26.228 "supported_io_types": { 00:05:26.228 "read": true, 00:05:26.228 "write": true, 00:05:26.228 "unmap": true, 00:05:26.228 "flush": true, 00:05:26.228 "reset": true, 00:05:26.228 "nvme_admin": false, 00:05:26.228 "nvme_io": false, 00:05:26.228 "nvme_io_md": false, 00:05:26.228 "write_zeroes": true, 00:05:26.228 "zcopy": true, 00:05:26.228 "get_zone_info": false, 00:05:26.228 "zone_management": false, 00:05:26.228 "zone_append": false, 00:05:26.228 "compare": false, 00:05:26.228 "compare_and_write": false, 00:05:26.228 "abort": true, 00:05:26.228 "seek_hole": false, 00:05:26.228 "seek_data": false, 00:05:26.228 "copy": true, 00:05:26.228 "nvme_iov_md": false 00:05:26.228 }, 00:05:26.228 "memory_domains": [ 00:05:26.228 { 00:05:26.228 "dma_device_id": "system", 00:05:26.228 "dma_device_type": 1 00:05:26.228 }, 
00:05:26.228 { 00:05:26.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.228 "dma_device_type": 2 00:05:26.228 } 00:05:26.228 ], 00:05:26.228 "driver_specific": { 00:05:26.228 "passthru": { 00:05:26.228 "name": "Passthru0", 00:05:26.228 "base_bdev_name": "Malloc0" 00:05:26.228 } 00:05:26.228 } 00:05:26.228 } 00:05:26.228 ]' 00:05:26.228 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.228 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.228 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.228 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.228 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.228 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.228 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.228 18:00:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.228 00:05:26.228 real 0m0.228s 00:05:26.228 user 0m0.128s 00:05:26.228 sys 0m0.035s 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.228 18:00:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 ************************************ 00:05:26.228 END TEST rpc_integrity 00:05:26.228 ************************************ 00:05:26.228 18:00:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.228 18:00:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.228 18:00:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.228 18:00:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.489 ************************************ 00:05:26.489 START TEST rpc_plugins 00:05:26.489 ************************************ 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.489 18:00:00 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.489 { 00:05:26.489 "name": "Malloc1", 00:05:26.489 "aliases": [ 00:05:26.489 "4c0338ad-afea-4347-a927-80c9adc18edb" 00:05:26.489 ], 00:05:26.489 "product_name": "Malloc disk", 00:05:26.489 "block_size": 4096, 00:05:26.489 "num_blocks": 256, 00:05:26.489 "uuid": "4c0338ad-afea-4347-a927-80c9adc18edb", 00:05:26.489 "assigned_rate_limits": { 00:05:26.489 "rw_ios_per_sec": 0, 00:05:26.489 "rw_mbytes_per_sec": 0, 00:05:26.489 "r_mbytes_per_sec": 0, 00:05:26.489 "w_mbytes_per_sec": 0 00:05:26.489 }, 00:05:26.489 "claimed": false, 00:05:26.489 "zoned": false, 00:05:26.489 "supported_io_types": { 00:05:26.489 "read": true, 00:05:26.489 "write": true, 00:05:26.489 "unmap": true, 00:05:26.489 "flush": true, 00:05:26.489 "reset": true, 00:05:26.489 "nvme_admin": false, 00:05:26.489 "nvme_io": false, 00:05:26.489 "nvme_io_md": false, 00:05:26.489 "write_zeroes": true, 00:05:26.489 "zcopy": true, 00:05:26.489 "get_zone_info": false, 00:05:26.489 "zone_management": false, 00:05:26.489 "zone_append": false, 00:05:26.489 "compare": false, 00:05:26.489 "compare_and_write": false, 00:05:26.489 "abort": true, 00:05:26.489 "seek_hole": false, 00:05:26.489 "seek_data": false, 00:05:26.489 "copy": true, 00:05:26.489 "nvme_iov_md": false 00:05:26.489 }, 00:05:26.489 "memory_domains": [ 00:05:26.489 { 00:05:26.489 "dma_device_id": "system", 00:05:26.489 "dma_device_type": 1 00:05:26.489 }, 00:05:26.489 { 00:05:26.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.489 "dma_device_type": 2 00:05:26.489 } 00:05:26.489 ], 00:05:26.489 "driver_specific": {} 00:05:26.489 } 00:05:26.489 ]' 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.489 18:00:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.489 00:05:26.489 real 0m0.113s 00:05:26.489 user 0m0.063s 00:05:26.489 sys 0m0.015s 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.489 18:00:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.489 ************************************ 00:05:26.489 END TEST rpc_plugins 00:05:26.489 ************************************ 00:05:26.489 18:00:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.489 18:00:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.489 18:00:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.489 18:00:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.489 ************************************ 00:05:26.489 START TEST rpc_trace_cmd_test 
00:05:26.489 ************************************ 00:05:26.489 18:00:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:26.489 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.489 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.489 18:00:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.489 18:00:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.489 18:00:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.489 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.489 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70930", 00:05:26.489 "tpoint_group_mask": "0x8", 00:05:26.489 "iscsi_conn": { 00:05:26.489 "mask": "0x2", 00:05:26.489 "tpoint_mask": "0x0" 00:05:26.489 }, 00:05:26.489 "scsi": { 00:05:26.489 "mask": "0x4", 00:05:26.489 "tpoint_mask": "0x0" 00:05:26.489 }, 00:05:26.489 "bdev": { 00:05:26.489 "mask": "0x8", 00:05:26.489 "tpoint_mask": "0xffffffffffffffff" 00:05:26.489 }, 00:05:26.489 "nvmf_rdma": { 00:05:26.489 "mask": "0x10", 00:05:26.489 "tpoint_mask": "0x0" 00:05:26.489 }, 00:05:26.489 "nvmf_tcp": { 00:05:26.489 "mask": "0x20", 00:05:26.489 "tpoint_mask": "0x0" 00:05:26.489 }, 00:05:26.489 "ftl": { 00:05:26.489 "mask": "0x40", 00:05:26.489 "tpoint_mask": "0x0" 00:05:26.489 }, 00:05:26.489 "blobfs": { 00:05:26.489 "mask": "0x80", 00:05:26.489 "tpoint_mask": "0x0" 00:05:26.489 }, 00:05:26.489 "dsa": { 00:05:26.489 "mask": "0x200", 00:05:26.489 "tpoint_mask": "0x0" 00:05:26.489 }, 00:05:26.489 "thread": { 00:05:26.489 "mask": "0x400", 00:05:26.489 "tpoint_mask": "0x0" 00:05:26.489 }, 00:05:26.489 "nvme_pcie": { 00:05:26.489 "mask": "0x800", 00:05:26.489 "tpoint_mask": "0x0" 00:05:26.489 }, 00:05:26.489 "iaa": { 00:05:26.490 "mask": "0x1000", 00:05:26.490 "tpoint_mask": "0x0" 00:05:26.490 }, 00:05:26.490 "nvme_tcp": { 00:05:26.490 "mask": "0x2000", 00:05:26.490 "tpoint_mask": "0x0" 00:05:26.490 }, 00:05:26.490 "bdev_nvme": { 00:05:26.490 "mask": "0x4000", 00:05:26.490 "tpoint_mask": "0x0" 00:05:26.490 }, 00:05:26.490 "sock": { 00:05:26.490 "mask": "0x8000", 00:05:26.490 "tpoint_mask": "0x0" 00:05:26.490 }, 00:05:26.490 "blob": { 00:05:26.490 "mask": "0x10000", 00:05:26.490 "tpoint_mask": "0x0" 00:05:26.490 }, 00:05:26.490 "bdev_raid": { 00:05:26.490 "mask": "0x20000", 00:05:26.490 "tpoint_mask": "0x0" 00:05:26.490 }, 00:05:26.490 "scheduler": { 00:05:26.490 "mask": "0x40000", 00:05:26.490 "tpoint_mask": "0x0" 00:05:26.490 } 00:05:26.490 }' 00:05:26.490 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.490 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:26.490 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.490 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.490 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.750 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.750 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.750 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.750 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.750 18:00:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.750 00:05:26.750 real 0m0.175s 00:05:26.750 
user 0m0.138s 00:05:26.750 sys 0m0.028s 00:05:26.750 ************************************ 00:05:26.750 END TEST rpc_trace_cmd_test 00:05:26.750 ************************************ 00:05:26.750 18:00:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.750 18:00:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.750 18:00:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.750 18:00:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.750 18:00:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.750 18:00:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.750 18:00:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.750 18:00:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.750 ************************************ 00:05:26.750 START TEST rpc_daemon_integrity 00:05:26.750 ************************************ 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.750 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.750 { 00:05:26.750 "name": "Malloc2", 00:05:26.750 "aliases": [ 00:05:26.750 "d0ef09ae-0262-4162-9aee-696f65840712" 00:05:26.750 ], 00:05:26.750 "product_name": "Malloc disk", 00:05:26.750 "block_size": 512, 00:05:26.750 "num_blocks": 16384, 00:05:26.750 "uuid": "d0ef09ae-0262-4162-9aee-696f65840712", 00:05:26.750 "assigned_rate_limits": { 00:05:26.750 "rw_ios_per_sec": 0, 00:05:26.750 "rw_mbytes_per_sec": 0, 00:05:26.750 "r_mbytes_per_sec": 0, 00:05:26.750 "w_mbytes_per_sec": 0 00:05:26.750 }, 00:05:26.750 "claimed": false, 00:05:26.750 "zoned": false, 00:05:26.750 "supported_io_types": { 00:05:26.750 "read": true, 00:05:26.750 "write": true, 00:05:26.750 "unmap": true, 00:05:26.750 "flush": true, 00:05:26.750 "reset": true, 00:05:26.750 "nvme_admin": false, 00:05:26.750 "nvme_io": false, 00:05:26.750 "nvme_io_md": false, 00:05:26.750 "write_zeroes": true, 00:05:26.750 "zcopy": true, 00:05:26.750 "get_zone_info": 
false, 00:05:26.750 "zone_management": false, 00:05:26.750 "zone_append": false, 00:05:26.750 "compare": false, 00:05:26.750 "compare_and_write": false, 00:05:26.750 "abort": true, 00:05:26.750 "seek_hole": false, 00:05:26.750 "seek_data": false, 00:05:26.750 "copy": true, 00:05:26.750 "nvme_iov_md": false 00:05:26.750 }, 00:05:26.751 "memory_domains": [ 00:05:26.751 { 00:05:26.751 "dma_device_id": "system", 00:05:26.751 "dma_device_type": 1 00:05:26.751 }, 00:05:26.751 { 00:05:26.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.751 "dma_device_type": 2 00:05:26.751 } 00:05:26.751 ], 00:05:26.751 "driver_specific": {} 00:05:26.751 } 00:05:26.751 ]' 00:05:26.751 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.751 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.751 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:26.751 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.751 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.751 [2024-12-13 18:00:01.115630] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:26.751 [2024-12-13 18:00:01.115705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.751 [2024-12-13 18:00:01.115733] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:26.751 [2024-12-13 18:00:01.115743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.751 [2024-12-13 18:00:01.118271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.751 [2024-12-13 18:00:01.118334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.751 Passthru0 00:05:26.751 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.751 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.751 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.751 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.012 { 00:05:27.012 "name": "Malloc2", 00:05:27.012 "aliases": [ 00:05:27.012 "d0ef09ae-0262-4162-9aee-696f65840712" 00:05:27.012 ], 00:05:27.012 "product_name": "Malloc disk", 00:05:27.012 "block_size": 512, 00:05:27.012 "num_blocks": 16384, 00:05:27.012 "uuid": "d0ef09ae-0262-4162-9aee-696f65840712", 00:05:27.012 "assigned_rate_limits": { 00:05:27.012 "rw_ios_per_sec": 0, 00:05:27.012 "rw_mbytes_per_sec": 0, 00:05:27.012 "r_mbytes_per_sec": 0, 00:05:27.012 "w_mbytes_per_sec": 0 00:05:27.012 }, 00:05:27.012 "claimed": true, 00:05:27.012 "claim_type": "exclusive_write", 00:05:27.012 "zoned": false, 00:05:27.012 "supported_io_types": { 00:05:27.012 "read": true, 00:05:27.012 "write": true, 00:05:27.012 "unmap": true, 00:05:27.012 "flush": true, 00:05:27.012 "reset": true, 00:05:27.012 "nvme_admin": false, 00:05:27.012 "nvme_io": false, 00:05:27.012 "nvme_io_md": false, 00:05:27.012 "write_zeroes": true, 00:05:27.012 "zcopy": true, 00:05:27.012 "get_zone_info": false, 00:05:27.012 "zone_management": false, 00:05:27.012 "zone_append": false, 00:05:27.012 "compare": false, 
00:05:27.012 "compare_and_write": false, 00:05:27.012 "abort": true, 00:05:27.012 "seek_hole": false, 00:05:27.012 "seek_data": false, 00:05:27.012 "copy": true, 00:05:27.012 "nvme_iov_md": false 00:05:27.012 }, 00:05:27.012 "memory_domains": [ 00:05:27.012 { 00:05:27.012 "dma_device_id": "system", 00:05:27.012 "dma_device_type": 1 00:05:27.012 }, 00:05:27.012 { 00:05:27.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.012 "dma_device_type": 2 00:05:27.012 } 00:05:27.012 ], 00:05:27.012 "driver_specific": {} 00:05:27.012 }, 00:05:27.012 { 00:05:27.012 "name": "Passthru0", 00:05:27.012 "aliases": [ 00:05:27.012 "322aad4c-b287-5cc1-a460-941ff5c6bb01" 00:05:27.012 ], 00:05:27.012 "product_name": "passthru", 00:05:27.012 "block_size": 512, 00:05:27.012 "num_blocks": 16384, 00:05:27.012 "uuid": "322aad4c-b287-5cc1-a460-941ff5c6bb01", 00:05:27.012 "assigned_rate_limits": { 00:05:27.012 "rw_ios_per_sec": 0, 00:05:27.012 "rw_mbytes_per_sec": 0, 00:05:27.012 "r_mbytes_per_sec": 0, 00:05:27.012 "w_mbytes_per_sec": 0 00:05:27.012 }, 00:05:27.012 "claimed": false, 00:05:27.012 "zoned": false, 00:05:27.012 "supported_io_types": { 00:05:27.012 "read": true, 00:05:27.012 "write": true, 00:05:27.012 "unmap": true, 00:05:27.012 "flush": true, 00:05:27.012 "reset": true, 00:05:27.012 "nvme_admin": false, 00:05:27.012 "nvme_io": false, 00:05:27.012 "nvme_io_md": false, 00:05:27.012 "write_zeroes": true, 00:05:27.012 "zcopy": true, 00:05:27.012 "get_zone_info": false, 00:05:27.012 "zone_management": false, 00:05:27.012 "zone_append": false, 00:05:27.012 "compare": false, 00:05:27.012 "compare_and_write": false, 00:05:27.012 "abort": true, 00:05:27.012 "seek_hole": false, 00:05:27.012 "seek_data": false, 00:05:27.012 "copy": true, 00:05:27.012 "nvme_iov_md": false 00:05:27.012 }, 00:05:27.012 "memory_domains": [ 00:05:27.012 { 00:05:27.012 "dma_device_id": "system", 00:05:27.012 "dma_device_type": 1 00:05:27.012 }, 00:05:27.012 { 00:05:27.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.012 "dma_device_type": 2 00:05:27.012 } 00:05:27.012 ], 00:05:27.012 "driver_specific": { 00:05:27.012 "passthru": { 00:05:27.012 "name": "Passthru0", 00:05:27.012 "base_bdev_name": "Malloc2" 00:05:27.012 } 00:05:27.012 } 00:05:27.012 } 00:05:27.012 ]' 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.012 00:05:27.012 real 0m0.222s 00:05:27.012 user 0m0.126s 00:05:27.012 sys 0m0.035s 00:05:27.012 ************************************ 00:05:27.012 END TEST rpc_daemon_integrity 00:05:27.012 ************************************ 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.012 18:00:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.012 18:00:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.012 18:00:01 rpc -- rpc/rpc.sh@84 -- # killprocess 70930 00:05:27.012 18:00:01 rpc -- common/autotest_common.sh@954 -- # '[' -z 70930 ']' 00:05:27.012 18:00:01 rpc -- common/autotest_common.sh@958 -- # kill -0 70930 00:05:27.012 18:00:01 rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.013 18:00:01 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.013 18:00:01 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70930 00:05:27.013 18:00:01 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.013 killing process with pid 70930 00:05:27.013 18:00:01 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.013 18:00:01 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70930' 00:05:27.013 18:00:01 rpc -- common/autotest_common.sh@973 -- # kill 70930 00:05:27.013 18:00:01 rpc -- common/autotest_common.sh@978 -- # wait 70930 00:05:27.274 00:05:27.274 real 0m2.403s 00:05:27.274 user 0m2.797s 00:05:27.274 sys 0m0.664s 00:05:27.274 18:00:01 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.274 ************************************ 00:05:27.274 END TEST rpc 00:05:27.274 ************************************ 00:05:27.274 18:00:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.274 18:00:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:27.274 18:00:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.274 18:00:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.274 18:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:27.536 ************************************ 00:05:27.536 START TEST skip_rpc 00:05:27.536 ************************************ 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:27.536 * Looking for test storage... 
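The rpc_daemon_integrity run that ends above drives the whole malloc+passthru stack through RPC alone: create both bdevs, count them with jq, tear them down, and confirm the bdev list is empty again. A condensed sketch of that sequence, assuming the repo's rpc.py client in place of the suite's rpc_cmd wrapper:

    # rpc_daemon_integrity, condensed (sketch; rpc.py path assumed from this repo layout)
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_malloc_create -b Malloc2 8 512             # 16384 blocks x 512 B, matching the dump above
    $RPC bdev_passthru_create -b Malloc2 -p Passthru0    # layer the passthru vbdev on the malloc base
    [ "$($RPC bdev_get_bdevs | jq length)" -eq 2 ]       # Malloc2 + Passthru0
    $RPC bdev_passthru_delete Passthru0
    $RPC bdev_malloc_delete Malloc2
    [ "$($RPC bdev_get_bdevs | jq length)" -eq 0 ]       # back to an empty bdev list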
00:05:27.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.536 18:00:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.536 --rc genhtml_branch_coverage=1 00:05:27.536 --rc genhtml_function_coverage=1 00:05:27.536 --rc genhtml_legend=1 00:05:27.536 --rc geninfo_all_blocks=1 00:05:27.536 --rc geninfo_unexecuted_blocks=1 00:05:27.536 00:05:27.536 ' 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.536 --rc genhtml_branch_coverage=1 00:05:27.536 --rc genhtml_function_coverage=1 00:05:27.536 --rc genhtml_legend=1 00:05:27.536 --rc geninfo_all_blocks=1 00:05:27.536 --rc geninfo_unexecuted_blocks=1 00:05:27.536 00:05:27.536 ' 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:27.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.536 --rc genhtml_branch_coverage=1 00:05:27.536 --rc genhtml_function_coverage=1 00:05:27.536 --rc genhtml_legend=1 00:05:27.536 --rc geninfo_all_blocks=1 00:05:27.536 --rc geninfo_unexecuted_blocks=1 00:05:27.536 00:05:27.536 ' 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.536 --rc genhtml_branch_coverage=1 00:05:27.536 --rc genhtml_function_coverage=1 00:05:27.536 --rc genhtml_legend=1 00:05:27.536 --rc geninfo_all_blocks=1 00:05:27.536 --rc geninfo_unexecuted_blocks=1 00:05:27.536 00:05:27.536 ' 00:05:27.536 18:00:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.536 18:00:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:27.536 18:00:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.536 18:00:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.536 ************************************ 00:05:27.536 START TEST skip_rpc 00:05:27.536 ************************************ 00:05:27.536 18:00:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:27.536 18:00:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71132 00:05:27.536 18:00:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.536 18:00:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:27.536 18:00:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:27.536 [2024-12-13 18:00:01.911914] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
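The skip_rpc test starting above is the inverse check: with --no-rpc-server the target must come up but answer no RPC at all, so the NOT wrapper further down asserts that spdk_get_version fails. A minimal sketch of the same flow, with the suite's readiness helpers simplified to the sleep the test itself uses:

    # skip_rpc flow, sketched (paths taken from this log; timing simplified)
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                   # the test also just sleeps before probing
    if "$SPDK/scripts/rpc.py" spdk_get_version; then
        echo "unexpected: RPC answered with --no-rpc-server" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"; wait "$pid"                  # mirrors the killprocess/wait pair below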
00:05:27.536 [2024-12-13 18:00:01.912046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71132 ] 00:05:27.798 [2024-12-13 18:00:02.059788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.798 [2024-12-13 18:00:02.089693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71132 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71132 ']' 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71132 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71132 00:05:33.145 killing process with pid 71132 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71132' 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71132 00:05:33.145 18:00:06 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71132 00:05:33.145 ************************************ 00:05:33.145 END TEST skip_rpc 00:05:33.145 ************************************ 00:05:33.145 00:05:33.145 real 0m5.261s 00:05:33.145 user 0m4.856s 00:05:33.145 sys 0m0.303s 00:05:33.145 18:00:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.145 18:00:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:33.145 18:00:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:33.145 18:00:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.145 18:00:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.145 18:00:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.145 ************************************ 00:05:33.145 START TEST skip_rpc_with_json 00:05:33.145 ************************************ 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71219 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71219 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71219 ']' 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.145 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.145 [2024-12-13 18:00:07.209215] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:33.145 [2024-12-13 18:00:07.209351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71219 ] 00:05:33.145 [2024-12-13 18:00:07.340110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.145 [2024-12-13 18:00:07.356790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.712 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.712 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:33.712 18:00:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:33.712 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.712 18:00:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.712 [2024-12-13 18:00:08.001731] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:33.712 request: 00:05:33.712 { 00:05:33.712 "trtype": "tcp", 00:05:33.712 "method": "nvmf_get_transports", 00:05:33.712 "req_id": 1 00:05:33.712 } 00:05:33.712 Got JSON-RPC error response 00:05:33.712 response: 00:05:33.712 { 00:05:33.712 "code": -19, 00:05:33.712 "message": "No such device" 00:05:33.712 } 00:05:33.712 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:33.712 18:00:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:33.712 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.712 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.712 [2024-12-13 18:00:08.013832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.712 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.712 18:00:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:33.712 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.712 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.971 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.971 18:00:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.971 { 00:05:33.971 "subsystems": [ 00:05:33.971 { 00:05:33.971 "subsystem": "fsdev", 00:05:33.971 "config": [ 00:05:33.971 { 00:05:33.971 "method": "fsdev_set_opts", 00:05:33.971 "params": { 00:05:33.971 "fsdev_io_pool_size": 65535, 00:05:33.971 "fsdev_io_cache_size": 256 00:05:33.971 } 00:05:33.971 } 00:05:33.971 ] 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "subsystem": "keyring", 00:05:33.971 "config": [] 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "subsystem": "iobuf", 00:05:33.971 "config": [ 00:05:33.971 { 00:05:33.971 "method": "iobuf_set_options", 00:05:33.971 "params": { 00:05:33.971 "small_pool_count": 8192, 00:05:33.971 "large_pool_count": 1024, 00:05:33.971 "small_bufsize": 8192, 00:05:33.971 "large_bufsize": 135168, 00:05:33.971 "enable_numa": false 00:05:33.971 } 00:05:33.971 } 00:05:33.971 ] 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "subsystem": "sock", 00:05:33.971 "config": [ 00:05:33.971 { 
00:05:33.971 "method": "sock_set_default_impl", 00:05:33.971 "params": { 00:05:33.971 "impl_name": "posix" 00:05:33.971 } 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "method": "sock_impl_set_options", 00:05:33.971 "params": { 00:05:33.971 "impl_name": "ssl", 00:05:33.971 "recv_buf_size": 4096, 00:05:33.971 "send_buf_size": 4096, 00:05:33.971 "enable_recv_pipe": true, 00:05:33.971 "enable_quickack": false, 00:05:33.971 "enable_placement_id": 0, 00:05:33.971 "enable_zerocopy_send_server": true, 00:05:33.971 "enable_zerocopy_send_client": false, 00:05:33.971 "zerocopy_threshold": 0, 00:05:33.971 "tls_version": 0, 00:05:33.971 "enable_ktls": false 00:05:33.971 } 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "method": "sock_impl_set_options", 00:05:33.971 "params": { 00:05:33.971 "impl_name": "posix", 00:05:33.971 "recv_buf_size": 2097152, 00:05:33.971 "send_buf_size": 2097152, 00:05:33.971 "enable_recv_pipe": true, 00:05:33.971 "enable_quickack": false, 00:05:33.971 "enable_placement_id": 0, 00:05:33.971 "enable_zerocopy_send_server": true, 00:05:33.971 "enable_zerocopy_send_client": false, 00:05:33.971 "zerocopy_threshold": 0, 00:05:33.971 "tls_version": 0, 00:05:33.971 "enable_ktls": false 00:05:33.971 } 00:05:33.971 } 00:05:33.971 ] 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "subsystem": "vmd", 00:05:33.971 "config": [] 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "subsystem": "accel", 00:05:33.971 "config": [ 00:05:33.971 { 00:05:33.971 "method": "accel_set_options", 00:05:33.971 "params": { 00:05:33.971 "small_cache_size": 128, 00:05:33.971 "large_cache_size": 16, 00:05:33.971 "task_count": 2048, 00:05:33.971 "sequence_count": 2048, 00:05:33.971 "buf_count": 2048 00:05:33.971 } 00:05:33.971 } 00:05:33.971 ] 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "subsystem": "bdev", 00:05:33.971 "config": [ 00:05:33.971 { 00:05:33.971 "method": "bdev_set_options", 00:05:33.971 "params": { 00:05:33.971 "bdev_io_pool_size": 65535, 00:05:33.971 "bdev_io_cache_size": 256, 00:05:33.971 "bdev_auto_examine": true, 00:05:33.971 "iobuf_small_cache_size": 128, 00:05:33.971 "iobuf_large_cache_size": 16 00:05:33.971 } 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "method": "bdev_raid_set_options", 00:05:33.971 "params": { 00:05:33.971 "process_window_size_kb": 1024, 00:05:33.971 "process_max_bandwidth_mb_sec": 0 00:05:33.971 } 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "method": "bdev_iscsi_set_options", 00:05:33.971 "params": { 00:05:33.971 "timeout_sec": 30 00:05:33.971 } 00:05:33.971 }, 00:05:33.971 { 00:05:33.971 "method": "bdev_nvme_set_options", 00:05:33.971 "params": { 00:05:33.971 "action_on_timeout": "none", 00:05:33.971 "timeout_us": 0, 00:05:33.971 "timeout_admin_us": 0, 00:05:33.971 "keep_alive_timeout_ms": 10000, 00:05:33.971 "arbitration_burst": 0, 00:05:33.971 "low_priority_weight": 0, 00:05:33.971 "medium_priority_weight": 0, 00:05:33.971 "high_priority_weight": 0, 00:05:33.971 "nvme_adminq_poll_period_us": 10000, 00:05:33.971 "nvme_ioq_poll_period_us": 0, 00:05:33.971 "io_queue_requests": 0, 00:05:33.971 "delay_cmd_submit": true, 00:05:33.971 "transport_retry_count": 4, 00:05:33.971 "bdev_retry_count": 3, 00:05:33.971 "transport_ack_timeout": 0, 00:05:33.971 "ctrlr_loss_timeout_sec": 0, 00:05:33.971 "reconnect_delay_sec": 0, 00:05:33.971 "fast_io_fail_timeout_sec": 0, 00:05:33.971 "disable_auto_failback": false, 00:05:33.971 "generate_uuids": false, 00:05:33.971 "transport_tos": 0, 00:05:33.971 "nvme_error_stat": false, 00:05:33.971 "rdma_srq_size": 0, 00:05:33.971 "io_path_stat": false, 
00:05:33.971 "allow_accel_sequence": false, 00:05:33.971 "rdma_max_cq_size": 0, 00:05:33.971 "rdma_cm_event_timeout_ms": 0, 00:05:33.971 "dhchap_digests": [ 00:05:33.971 "sha256", 00:05:33.971 "sha384", 00:05:33.971 "sha512" 00:05:33.971 ], 00:05:33.971 "dhchap_dhgroups": [ 00:05:33.971 "null", 00:05:33.972 "ffdhe2048", 00:05:33.972 "ffdhe3072", 00:05:33.972 "ffdhe4096", 00:05:33.972 "ffdhe6144", 00:05:33.972 "ffdhe8192" 00:05:33.972 ], 00:05:33.972 "rdma_umr_per_io": false 00:05:33.972 } 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "method": "bdev_nvme_set_hotplug", 00:05:33.972 "params": { 00:05:33.972 "period_us": 100000, 00:05:33.972 "enable": false 00:05:33.972 } 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "method": "bdev_wait_for_examine" 00:05:33.972 } 00:05:33.972 ] 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "subsystem": "scsi", 00:05:33.972 "config": null 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "subsystem": "scheduler", 00:05:33.972 "config": [ 00:05:33.972 { 00:05:33.972 "method": "framework_set_scheduler", 00:05:33.972 "params": { 00:05:33.972 "name": "static" 00:05:33.972 } 00:05:33.972 } 00:05:33.972 ] 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "subsystem": "vhost_scsi", 00:05:33.972 "config": [] 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "subsystem": "vhost_blk", 00:05:33.972 "config": [] 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "subsystem": "ublk", 00:05:33.972 "config": [] 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "subsystem": "nbd", 00:05:33.972 "config": [] 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "subsystem": "nvmf", 00:05:33.972 "config": [ 00:05:33.972 { 00:05:33.972 "method": "nvmf_set_config", 00:05:33.972 "params": { 00:05:33.972 "discovery_filter": "match_any", 00:05:33.972 "admin_cmd_passthru": { 00:05:33.972 "identify_ctrlr": false 00:05:33.972 }, 00:05:33.972 "dhchap_digests": [ 00:05:33.972 "sha256", 00:05:33.972 "sha384", 00:05:33.972 "sha512" 00:05:33.972 ], 00:05:33.972 "dhchap_dhgroups": [ 00:05:33.972 "null", 00:05:33.972 "ffdhe2048", 00:05:33.972 "ffdhe3072", 00:05:33.972 "ffdhe4096", 00:05:33.972 "ffdhe6144", 00:05:33.972 "ffdhe8192" 00:05:33.972 ] 00:05:33.972 } 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "method": "nvmf_set_max_subsystems", 00:05:33.972 "params": { 00:05:33.972 "max_subsystems": 1024 00:05:33.972 } 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "method": "nvmf_set_crdt", 00:05:33.972 "params": { 00:05:33.972 "crdt1": 0, 00:05:33.972 "crdt2": 0, 00:05:33.972 "crdt3": 0 00:05:33.972 } 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "method": "nvmf_create_transport", 00:05:33.972 "params": { 00:05:33.972 "trtype": "TCP", 00:05:33.972 "max_queue_depth": 128, 00:05:33.972 "max_io_qpairs_per_ctrlr": 127, 00:05:33.972 "in_capsule_data_size": 4096, 00:05:33.972 "max_io_size": 131072, 00:05:33.972 "io_unit_size": 131072, 00:05:33.972 "max_aq_depth": 128, 00:05:33.972 "num_shared_buffers": 511, 00:05:33.972 "buf_cache_size": 4294967295, 00:05:33.972 "dif_insert_or_strip": false, 00:05:33.972 "zcopy": false, 00:05:33.972 "c2h_success": true, 00:05:33.972 "sock_priority": 0, 00:05:33.972 "abort_timeout_sec": 1, 00:05:33.972 "ack_timeout": 0, 00:05:33.972 "data_wr_pool_size": 0 00:05:33.972 } 00:05:33.972 } 00:05:33.972 ] 00:05:33.972 }, 00:05:33.972 { 00:05:33.972 "subsystem": "iscsi", 00:05:33.972 "config": [ 00:05:33.972 { 00:05:33.972 "method": "iscsi_set_options", 00:05:33.972 "params": { 00:05:33.972 "node_base": "iqn.2016-06.io.spdk", 00:05:33.972 "max_sessions": 128, 00:05:33.972 "max_connections_per_session": 2, 00:05:33.972 
"max_queue_depth": 64, 00:05:33.972 "default_time2wait": 2, 00:05:33.972 "default_time2retain": 20, 00:05:33.972 "first_burst_length": 8192, 00:05:33.972 "immediate_data": true, 00:05:33.972 "allow_duplicated_isid": false, 00:05:33.972 "error_recovery_level": 0, 00:05:33.972 "nop_timeout": 60, 00:05:33.972 "nop_in_interval": 30, 00:05:33.972 "disable_chap": false, 00:05:33.972 "require_chap": false, 00:05:33.972 "mutual_chap": false, 00:05:33.972 "chap_group": 0, 00:05:33.972 "max_large_datain_per_connection": 64, 00:05:33.972 "max_r2t_per_connection": 4, 00:05:33.972 "pdu_pool_size": 36864, 00:05:33.972 "immediate_data_pool_size": 16384, 00:05:33.972 "data_out_pool_size": 2048 00:05:33.972 } 00:05:33.972 } 00:05:33.972 ] 00:05:33.972 } 00:05:33.972 ] 00:05:33.972 } 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71219 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71219 ']' 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71219 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71219 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.972 killing process with pid 71219 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71219' 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71219 00:05:33.972 18:00:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71219 00:05:34.231 18:00:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71242 00:05:34.231 18:00:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:34.231 18:00:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.494 18:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71242 00:05:39.494 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71242 ']' 00:05:39.494 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71242 00:05:39.494 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:39.494 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71242 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.495 killing process with pid 71242 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71242' 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 71242 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71242 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.495 00:05:39.495 real 0m6.539s 00:05:39.495 user 0m6.236s 00:05:39.495 sys 0m0.482s 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.495 ************************************ 00:05:39.495 END TEST skip_rpc_with_json 00:05:39.495 ************************************ 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.495 18:00:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.495 18:00:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.495 18:00:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.495 18:00:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.495 ************************************ 00:05:39.495 START TEST skip_rpc_with_delay 00:05:39.495 ************************************ 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.495 [2024-12-13 18:00:13.803022] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
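The *ERROR* line above is the entire assertion of skip_rpc_with_delay: --wait-for-rpc must be rejected when the RPC server is disabled, so the wrapper only needs to see a non-zero exit. Sketched:

    # skip_rpc_with_delay reduces to: this invocation must fail (command copied from the log)
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: --wait-for-rpc was accepted without an RPC server" >&2
        exit 1
    fi
    echo "OK: incompatible flags rejected"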
00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.495 00:05:39.495 real 0m0.107s 00:05:39.495 user 0m0.060s 00:05:39.495 sys 0m0.046s 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.495 ************************************ 00:05:39.495 END TEST skip_rpc_with_delay 00:05:39.495 ************************************ 00:05:39.495 18:00:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.752 18:00:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:39.752 18:00:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:39.752 18:00:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:39.752 18:00:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.752 18:00:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.752 18:00:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.752 ************************************ 00:05:39.752 START TEST exit_on_failed_rpc_init 00:05:39.752 ************************************ 00:05:39.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.752 18:00:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:39.752 18:00:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71354 00:05:39.752 18:00:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71354 00:05:39.752 18:00:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71354 ']' 00:05:39.752 18:00:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.752 18:00:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.753 18:00:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.753 18:00:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.753 18:00:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.753 18:00:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.753 [2024-12-13 18:00:13.964301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
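exit_on_failed_rpc_init, whose first target is starting above, then launches a second spdk_tgt against the same default /var/tmp/spdk.sock and requires it to die on the "socket in use" error shown further down. A minimal sketch of that collision check, with the suite's waitforlisten replaced by a crude sleep:

    # Sketch: a second target on the same RPC socket must fail to init
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                      # first instance owns /var/tmp/spdk.sock
    pid1=$!
    sleep 1                                   # stand-in for the suite's waitforlisten
    if "$SPDK_TGT" -m 0x2; then               # same socket: expected to exit non-zero
        echo "FAIL: second spdk_tgt started despite the socket collision" >&2
        kill "$pid1"; exit 1
    fi
    kill "$pid1"; wait "$pid1"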
00:05:39.753 [2024-12-13 18:00:13.964519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71354 ] 00:05:39.753 [2024-12-13 18:00:14.108198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.753 [2024-12-13 18:00:14.127362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:40.686 18:00:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.686 [2024-12-13 18:00:14.881264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:40.686 [2024-12-13 18:00:14.881383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71366 ] 00:05:40.686 [2024-12-13 18:00:15.027942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.686 [2024-12-13 18:00:15.046958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.686 [2024-12-13 18:00:15.047183] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:40.686 [2024-12-13 18:00:15.047206] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:40.686 [2024-12-13 18:00:15.047219] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71354 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71354 ']' 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71354 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71354 00:05:40.944 killing process with pid 71354 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71354' 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71354 00:05:40.944 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71354 00:05:41.201 00:05:41.201 real 0m1.487s 00:05:41.201 user 0m1.634s 00:05:41.201 sys 0m0.362s 00:05:41.201 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.201 18:00:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.201 ************************************ 00:05:41.201 END TEST exit_on_failed_rpc_init 00:05:41.201 ************************************ 00:05:41.201 18:00:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:41.201 ************************************ 00:05:41.201 END TEST skip_rpc 00:05:41.201 ************************************ 00:05:41.201 00:05:41.201 real 0m13.775s 00:05:41.201 user 0m12.946s 00:05:41.201 sys 0m1.358s 00:05:41.201 18:00:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.201 18:00:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.201 18:00:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:41.201 18:00:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.201 18:00:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.201 18:00:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.201 
************************************ 00:05:41.201 START TEST rpc_client 00:05:41.201 ************************************ 00:05:41.201 18:00:15 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:41.201 * Looking for test storage... 00:05:41.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:41.201 18:00:15 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.201 18:00:15 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.201 18:00:15 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.494 18:00:15 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.494 18:00:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:41.494 18:00:15 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.494 18:00:15 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.494 --rc genhtml_branch_coverage=1 00:05:41.494 --rc genhtml_function_coverage=1 00:05:41.494 --rc genhtml_legend=1 00:05:41.494 --rc geninfo_all_blocks=1 00:05:41.494 --rc geninfo_unexecuted_blocks=1 00:05:41.494 00:05:41.494 ' 00:05:41.494 18:00:15 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.494 --rc genhtml_branch_coverage=1 00:05:41.494 --rc genhtml_function_coverage=1 00:05:41.494 --rc genhtml_legend=1 00:05:41.494 --rc geninfo_all_blocks=1 00:05:41.494 --rc geninfo_unexecuted_blocks=1 00:05:41.494 00:05:41.494 ' 00:05:41.494 18:00:15 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.494 --rc genhtml_branch_coverage=1 00:05:41.494 --rc genhtml_function_coverage=1 00:05:41.494 --rc genhtml_legend=1 00:05:41.495 --rc geninfo_all_blocks=1 00:05:41.495 --rc geninfo_unexecuted_blocks=1 00:05:41.495 00:05:41.495 ' 00:05:41.495 18:00:15 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.495 --rc genhtml_branch_coverage=1 00:05:41.495 --rc genhtml_function_coverage=1 00:05:41.495 --rc genhtml_legend=1 00:05:41.495 --rc geninfo_all_blocks=1 00:05:41.495 --rc geninfo_unexecuted_blocks=1 00:05:41.495 00:05:41.495 ' 00:05:41.495 18:00:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:41.495 OK 00:05:41.495 18:00:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.495 00:05:41.495 real 0m0.191s 00:05:41.495 user 0m0.108s 00:05:41.495 sys 0m0.085s 00:05:41.495 18:00:15 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.495 18:00:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:41.495 ************************************ 00:05:41.495 END TEST rpc_client 00:05:41.495 ************************************ 00:05:41.495 18:00:15 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.495 18:00:15 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.495 18:00:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.495 18:00:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.495 ************************************ 00:05:41.495 START TEST json_config 00:05:41.495 ************************************ 00:05:41.495 18:00:15 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.495 18:00:15 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.495 18:00:15 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.495 18:00:15 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.495 18:00:15 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.495 18:00:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.495 18:00:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.495 18:00:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.495 18:00:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.495 18:00:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.495 18:00:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.495 18:00:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.495 18:00:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.495 18:00:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.495 18:00:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.495 18:00:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.495 18:00:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:41.495 18:00:15 json_config -- scripts/common.sh@345 -- # : 1 00:05:41.495 18:00:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.495 18:00:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.495 18:00:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:41.495 18:00:15 json_config -- scripts/common.sh@353 -- # local d=1 00:05:41.495 18:00:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.495 18:00:15 json_config -- scripts/common.sh@355 -- # echo 1 00:05:41.495 18:00:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.495 18:00:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:41.754 18:00:15 json_config -- scripts/common.sh@353 -- # local d=2 00:05:41.754 18:00:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.754 18:00:15 json_config -- scripts/common.sh@355 -- # echo 2 00:05:41.754 18:00:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.754 18:00:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.754 18:00:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.754 18:00:15 json_config -- scripts/common.sh@368 -- # return 0 00:05:41.754 18:00:15 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.754 18:00:15 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.754 --rc genhtml_branch_coverage=1 00:05:41.754 --rc genhtml_function_coverage=1 00:05:41.754 --rc genhtml_legend=1 00:05:41.754 --rc geninfo_all_blocks=1 00:05:41.754 --rc geninfo_unexecuted_blocks=1 00:05:41.754 00:05:41.754 ' 00:05:41.754 18:00:15 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.754 --rc genhtml_branch_coverage=1 00:05:41.754 --rc genhtml_function_coverage=1 00:05:41.754 --rc genhtml_legend=1 00:05:41.754 --rc geninfo_all_blocks=1 00:05:41.754 --rc geninfo_unexecuted_blocks=1 00:05:41.754 00:05:41.754 ' 00:05:41.754 18:00:15 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.754 --rc genhtml_branch_coverage=1 00:05:41.754 --rc genhtml_function_coverage=1 00:05:41.754 --rc genhtml_legend=1 00:05:41.754 --rc geninfo_all_blocks=1 00:05:41.754 --rc geninfo_unexecuted_blocks=1 00:05:41.754 00:05:41.754 ' 00:05:41.754 18:00:15 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.754 --rc genhtml_branch_coverage=1 00:05:41.754 --rc genhtml_function_coverage=1 00:05:41.754 --rc genhtml_legend=1 00:05:41.754 --rc geninfo_all_blocks=1 00:05:41.754 --rc geninfo_unexecuted_blocks=1 00:05:41.754 00:05:41.754 ' 00:05:41.754 18:00:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.754 18:00:15 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b5ed997-18b8-4232-b3a4-124f0355258f 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0b5ed997-18b8-4232-b3a4-124f0355258f 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.754 18:00:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.754 18:00:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.754 18:00:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.754 18:00:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.754 18:00:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.754 18:00:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.754 18:00:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.754 18:00:15 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.754 18:00:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@51 -- # : 0 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.754 18:00:15 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.754 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.754 18:00:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.754 18:00:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:41.754 18:00:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.754 18:00:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.754 18:00:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.754 18:00:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.754 18:00:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:41.754 WARNING: No tests are enabled so not running JSON configuration tests 00:05:41.754 18:00:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:41.754 00:05:41.754 real 0m0.137s 00:05:41.754 user 0m0.085s 00:05:41.754 sys 0m0.050s 00:05:41.754 18:00:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.754 18:00:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.754 ************************************ 00:05:41.754 END TEST json_config 00:05:41.754 ************************************ 00:05:41.754 18:00:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.754 18:00:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.754 18:00:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.754 18:00:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.754 ************************************ 00:05:41.754 START TEST json_config_extra_key 00:05:41.754 ************************************ 00:05:41.754 18:00:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.754 18:00:15 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.754 18:00:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.754 18:00:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.754 18:00:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.754 18:00:16 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:41.754 18:00:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.755 --rc genhtml_branch_coverage=1 00:05:41.755 --rc genhtml_function_coverage=1 00:05:41.755 --rc genhtml_legend=1 00:05:41.755 --rc geninfo_all_blocks=1 00:05:41.755 --rc geninfo_unexecuted_blocks=1 00:05:41.755 00:05:41.755 ' 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.755 --rc genhtml_branch_coverage=1 00:05:41.755 --rc genhtml_function_coverage=1 00:05:41.755 --rc genhtml_legend=1 00:05:41.755 --rc geninfo_all_blocks=1 00:05:41.755 --rc geninfo_unexecuted_blocks=1 00:05:41.755 00:05:41.755 ' 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.755 --rc genhtml_branch_coverage=1 00:05:41.755 --rc genhtml_function_coverage=1 00:05:41.755 --rc genhtml_legend=1 00:05:41.755 --rc geninfo_all_blocks=1 00:05:41.755 --rc geninfo_unexecuted_blocks=1 00:05:41.755 00:05:41.755 ' 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.755 --rc genhtml_branch_coverage=1 00:05:41.755 --rc 
genhtml_function_coverage=1 00:05:41.755 --rc genhtml_legend=1 00:05:41.755 --rc geninfo_all_blocks=1 00:05:41.755 --rc geninfo_unexecuted_blocks=1 00:05:41.755 00:05:41.755 ' 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b5ed997-18b8-4232-b3a4-124f0355258f 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0b5ed997-18b8-4232-b3a4-124f0355258f 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.755 18:00:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.755 18:00:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.755 18:00:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.755 18:00:16 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.755 18:00:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.755 18:00:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.755 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.755 18:00:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.755 INFO: launching applications... 
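The "[: : integer expression expected" message interleaved above (nvmf/common.sh line 33, visible in both the json_config and json_config_extra_key traces) comes from a single-bracket numeric test run against an empty string: the trace shows '[' '' -eq 1 ']' failing. A minimal sketch of the failure mode and a defensive rewrite (the variable name here is illustrative, not the one used in nvmf/common.sh):

flag=""
if [ "$flag" -eq 1 ]; then        # expands to '[' '' -eq 1 ']': "[: : integer expression expected"
    echo enabled
fi
if [ "${flag:-0}" -eq 1 ]; then   # defaulting the empty value keeps the operand numeric
    echo enabled
fi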
00:05:41.755 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71549 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.755 Waiting for target to run... 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71549 /var/tmp/spdk_tgt.sock 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71549 ']' 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.755 18:00:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.755 18:00:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.014 [2024-12-13 18:00:16.131634] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:42.014 [2024-12-13 18:00:16.131893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71549 ] 00:05:42.272 [2024-12-13 18:00:16.432545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.272 [2024-12-13 18:00:16.443978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.838 18:00:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.838 18:00:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:42.838 18:00:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.838 00:05:42.838 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:42.838 INFO: shutting down applications... 
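Before the shutdown traced next, the launch above blocked in waitforlisten until pid 71549 was up behind /var/tmp/spdk_tgt.sock. A minimal sketch of that wait pattern, with a simplified loop body assumed (the real helper in autotest_common.sh does more than a socket existence check):

# Poll until the target process is listening on its UNIX-domain RPC socket.
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
    local max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$rpc_addr" ] && return 0           # socket present: target is up
        sleep 0.1
    done
    return 1                                     # timed out waiting
}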
00:05:42.838 18:00:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.838 18:00:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.838 18:00:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.838 18:00:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71549 ]] 00:05:42.838 18:00:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71549 00:05:42.838 18:00:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.838 18:00:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.838 18:00:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71549 00:05:42.838 18:00:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.096 18:00:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.096 18:00:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.096 18:00:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71549 00:05:43.096 18:00:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:43.096 18:00:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:43.096 18:00:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:43.096 18:00:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:43.096 SPDK target shutdown done 00:05:43.354 18:00:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:43.354 Success 00:05:43.354 00:05:43.354 real 0m1.555s 00:05:43.354 user 0m1.251s 00:05:43.354 sys 0m0.332s 00:05:43.354 18:00:17 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.354 ************************************ 00:05:43.354 END TEST json_config_extra_key 00:05:43.354 ************************************ 00:05:43.354 18:00:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 18:00:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.354 18:00:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.354 18:00:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.354 18:00:17 -- common/autotest_common.sh@10 -- # set +x 00:05:43.355 ************************************ 00:05:43.355 START TEST alias_rpc 00:05:43.355 ************************************ 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.355 * Looking for test storage... 
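The json_config_extra_key teardown traced above is a SIGINT followed by a bounded liveness poll; condensed from the trace of json_config/common.sh, the loop is:

kill -SIGINT "$pid"                      # ask spdk_tgt to exit cleanly
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break  # kill -0 fails once the pid is gone
    sleep 0.5                            # up to 30 half-second waits
done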
00:05:43.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.355 18:00:17 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.355 --rc genhtml_branch_coverage=1 00:05:43.355 --rc genhtml_function_coverage=1 00:05:43.355 --rc genhtml_legend=1 00:05:43.355 --rc geninfo_all_blocks=1 00:05:43.355 --rc geninfo_unexecuted_blocks=1 00:05:43.355 00:05:43.355 ' 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.355 --rc genhtml_branch_coverage=1 00:05:43.355 --rc genhtml_function_coverage=1 00:05:43.355 --rc genhtml_legend=1 00:05:43.355 --rc geninfo_all_blocks=1 00:05:43.355 --rc geninfo_unexecuted_blocks=1 00:05:43.355 00:05:43.355 ' 00:05:43.355 18:00:17 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.355 --rc genhtml_branch_coverage=1 00:05:43.355 --rc genhtml_function_coverage=1 00:05:43.355 --rc genhtml_legend=1 00:05:43.355 --rc geninfo_all_blocks=1 00:05:43.355 --rc geninfo_unexecuted_blocks=1 00:05:43.355 00:05:43.355 ' 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.355 --rc genhtml_branch_coverage=1 00:05:43.355 --rc genhtml_function_coverage=1 00:05:43.355 --rc genhtml_legend=1 00:05:43.355 --rc geninfo_all_blocks=1 00:05:43.355 --rc geninfo_unexecuted_blocks=1 00:05:43.355 00:05:43.355 ' 00:05:43.355 18:00:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.355 18:00:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71622 00:05:43.355 18:00:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71622 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71622 ']' 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.355 18:00:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.355 18:00:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.355 [2024-12-13 18:00:17.713490] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:43.355 [2024-12-13 18:00:17.713610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71622 ] 00:05:43.614 [2024-12-13 18:00:17.859467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.614 [2024-12-13 18:00:17.877548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.546 18:00:18 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:44.547 18:00:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:44.547 18:00:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71622 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71622 ']' 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71622 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71622 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.547 killing process with pid 71622 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71622' 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 71622 00:05:44.547 18:00:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 71622 00:05:44.804 00:05:44.804 real 0m1.542s 00:05:44.804 user 0m1.722s 00:05:44.804 sys 0m0.327s 00:05:44.804 18:00:19 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.804 ************************************ 00:05:44.804 END TEST alias_rpc 00:05:44.804 18:00:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.804 ************************************ 00:05:44.804 18:00:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:44.804 18:00:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.804 18:00:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.804 18:00:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.804 18:00:19 -- common/autotest_common.sh@10 -- # set +x 00:05:44.804 ************************************ 00:05:44.804 START TEST spdkcli_tcp 00:05:44.804 ************************************ 00:05:44.804 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.804 * Looking for test storage... 
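The alias_rpc teardown just traced (killprocess 71622, before the spdkcli_tcp banner) guards against killing the wrong thing: it checks the pid is non-empty and alive, and on Linux reads the command name with ps and refuses to proceed if it is sudo. A sketch of that guard, condensed from the trace:

killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                   # must still be running
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1           # never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}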
00:05:44.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:44.804 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.804 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.804 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.062 18:00:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.062 --rc genhtml_branch_coverage=1 00:05:45.062 --rc genhtml_function_coverage=1 00:05:45.062 --rc genhtml_legend=1 00:05:45.062 --rc geninfo_all_blocks=1 00:05:45.062 --rc geninfo_unexecuted_blocks=1 00:05:45.062 00:05:45.062 ' 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.062 --rc genhtml_branch_coverage=1 00:05:45.062 --rc genhtml_function_coverage=1 00:05:45.062 --rc genhtml_legend=1 00:05:45.062 --rc geninfo_all_blocks=1 00:05:45.062 --rc geninfo_unexecuted_blocks=1 00:05:45.062 
00:05:45.062 ' 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.062 --rc genhtml_branch_coverage=1 00:05:45.062 --rc genhtml_function_coverage=1 00:05:45.062 --rc genhtml_legend=1 00:05:45.062 --rc geninfo_all_blocks=1 00:05:45.062 --rc geninfo_unexecuted_blocks=1 00:05:45.062 00:05:45.062 ' 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.062 --rc genhtml_branch_coverage=1 00:05:45.062 --rc genhtml_function_coverage=1 00:05:45.062 --rc genhtml_legend=1 00:05:45.062 --rc geninfo_all_blocks=1 00:05:45.062 --rc geninfo_unexecuted_blocks=1 00:05:45.062 00:05:45.062 ' 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71702 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71702 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71702 ']' 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.062 18:00:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:45.062 18:00:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.062 [2024-12-13 18:00:19.309817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
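The long rpc_get_methods listing that follows is fetched over TCP rather than through the UNIX socket directly: once the target is up, the test bridges port 9998 to /var/tmp/spdk.sock with socat and points rpc.py at the TCP side. The plumbing, reproduced from the trace (backgrounding socat is implied by the socat_pid bookkeeping):

# Bridge TCP 127.0.0.1:9998 to the spdk_tgt UNIX-domain RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# Query through the bridge: -r retries the connection, -t sets the timeout.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"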
00:05:45.062 [2024-12-13 18:00:19.309933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71702 ] 00:05:45.320 [2024-12-13 18:00:19.452853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.320 [2024-12-13 18:00:19.472061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.320 [2024-12-13 18:00:19.472099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.886 18:00:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.886 18:00:20 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:45.886 18:00:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71719 00:05:45.886 18:00:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:45.886 18:00:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.144 [ 00:05:46.144 "bdev_malloc_delete", 00:05:46.144 "bdev_malloc_create", 00:05:46.144 "bdev_null_resize", 00:05:46.144 "bdev_null_delete", 00:05:46.144 "bdev_null_create", 00:05:46.144 "bdev_nvme_cuse_unregister", 00:05:46.144 "bdev_nvme_cuse_register", 00:05:46.144 "bdev_opal_new_user", 00:05:46.144 "bdev_opal_set_lock_state", 00:05:46.144 "bdev_opal_delete", 00:05:46.144 "bdev_opal_get_info", 00:05:46.144 "bdev_opal_create", 00:05:46.144 "bdev_nvme_opal_revert", 00:05:46.144 "bdev_nvme_opal_init", 00:05:46.144 "bdev_nvme_send_cmd", 00:05:46.144 "bdev_nvme_set_keys", 00:05:46.144 "bdev_nvme_get_path_iostat", 00:05:46.144 "bdev_nvme_get_mdns_discovery_info", 00:05:46.144 "bdev_nvme_stop_mdns_discovery", 00:05:46.144 "bdev_nvme_start_mdns_discovery", 00:05:46.144 "bdev_nvme_set_multipath_policy", 00:05:46.144 "bdev_nvme_set_preferred_path", 00:05:46.144 "bdev_nvme_get_io_paths", 00:05:46.144 "bdev_nvme_remove_error_injection", 00:05:46.144 "bdev_nvme_add_error_injection", 00:05:46.144 "bdev_nvme_get_discovery_info", 00:05:46.144 "bdev_nvme_stop_discovery", 00:05:46.144 "bdev_nvme_start_discovery", 00:05:46.144 "bdev_nvme_get_controller_health_info", 00:05:46.144 "bdev_nvme_disable_controller", 00:05:46.144 "bdev_nvme_enable_controller", 00:05:46.144 "bdev_nvme_reset_controller", 00:05:46.144 "bdev_nvme_get_transport_statistics", 00:05:46.144 "bdev_nvme_apply_firmware", 00:05:46.144 "bdev_nvme_detach_controller", 00:05:46.144 "bdev_nvme_get_controllers", 00:05:46.144 "bdev_nvme_attach_controller", 00:05:46.144 "bdev_nvme_set_hotplug", 00:05:46.144 "bdev_nvme_set_options", 00:05:46.144 "bdev_passthru_delete", 00:05:46.144 "bdev_passthru_create", 00:05:46.144 "bdev_lvol_set_parent_bdev", 00:05:46.144 "bdev_lvol_set_parent", 00:05:46.144 "bdev_lvol_check_shallow_copy", 00:05:46.144 "bdev_lvol_start_shallow_copy", 00:05:46.144 "bdev_lvol_grow_lvstore", 00:05:46.144 "bdev_lvol_get_lvols", 00:05:46.144 "bdev_lvol_get_lvstores", 00:05:46.144 "bdev_lvol_delete", 00:05:46.144 "bdev_lvol_set_read_only", 00:05:46.144 "bdev_lvol_resize", 00:05:46.144 "bdev_lvol_decouple_parent", 00:05:46.144 "bdev_lvol_inflate", 00:05:46.144 "bdev_lvol_rename", 00:05:46.144 "bdev_lvol_clone_bdev", 00:05:46.144 "bdev_lvol_clone", 00:05:46.144 "bdev_lvol_snapshot", 00:05:46.144 "bdev_lvol_create", 00:05:46.144 "bdev_lvol_delete_lvstore", 00:05:46.144 "bdev_lvol_rename_lvstore", 00:05:46.144 
"bdev_lvol_create_lvstore", 00:05:46.144 "bdev_raid_set_options", 00:05:46.144 "bdev_raid_remove_base_bdev", 00:05:46.144 "bdev_raid_add_base_bdev", 00:05:46.144 "bdev_raid_delete", 00:05:46.144 "bdev_raid_create", 00:05:46.144 "bdev_raid_get_bdevs", 00:05:46.144 "bdev_error_inject_error", 00:05:46.144 "bdev_error_delete", 00:05:46.144 "bdev_error_create", 00:05:46.144 "bdev_split_delete", 00:05:46.144 "bdev_split_create", 00:05:46.144 "bdev_delay_delete", 00:05:46.144 "bdev_delay_create", 00:05:46.144 "bdev_delay_update_latency", 00:05:46.144 "bdev_zone_block_delete", 00:05:46.144 "bdev_zone_block_create", 00:05:46.144 "blobfs_create", 00:05:46.144 "blobfs_detect", 00:05:46.144 "blobfs_set_cache_size", 00:05:46.144 "bdev_xnvme_delete", 00:05:46.144 "bdev_xnvme_create", 00:05:46.144 "bdev_aio_delete", 00:05:46.144 "bdev_aio_rescan", 00:05:46.144 "bdev_aio_create", 00:05:46.144 "bdev_ftl_set_property", 00:05:46.144 "bdev_ftl_get_properties", 00:05:46.144 "bdev_ftl_get_stats", 00:05:46.144 "bdev_ftl_unmap", 00:05:46.144 "bdev_ftl_unload", 00:05:46.144 "bdev_ftl_delete", 00:05:46.144 "bdev_ftl_load", 00:05:46.144 "bdev_ftl_create", 00:05:46.144 "bdev_virtio_attach_controller", 00:05:46.144 "bdev_virtio_scsi_get_devices", 00:05:46.144 "bdev_virtio_detach_controller", 00:05:46.144 "bdev_virtio_blk_set_hotplug", 00:05:46.144 "bdev_iscsi_delete", 00:05:46.144 "bdev_iscsi_create", 00:05:46.144 "bdev_iscsi_set_options", 00:05:46.144 "accel_error_inject_error", 00:05:46.144 "ioat_scan_accel_module", 00:05:46.144 "dsa_scan_accel_module", 00:05:46.144 "iaa_scan_accel_module", 00:05:46.144 "keyring_file_remove_key", 00:05:46.144 "keyring_file_add_key", 00:05:46.144 "keyring_linux_set_options", 00:05:46.144 "fsdev_aio_delete", 00:05:46.144 "fsdev_aio_create", 00:05:46.144 "iscsi_get_histogram", 00:05:46.144 "iscsi_enable_histogram", 00:05:46.144 "iscsi_set_options", 00:05:46.144 "iscsi_get_auth_groups", 00:05:46.144 "iscsi_auth_group_remove_secret", 00:05:46.144 "iscsi_auth_group_add_secret", 00:05:46.144 "iscsi_delete_auth_group", 00:05:46.144 "iscsi_create_auth_group", 00:05:46.144 "iscsi_set_discovery_auth", 00:05:46.144 "iscsi_get_options", 00:05:46.144 "iscsi_target_node_request_logout", 00:05:46.144 "iscsi_target_node_set_redirect", 00:05:46.144 "iscsi_target_node_set_auth", 00:05:46.144 "iscsi_target_node_add_lun", 00:05:46.144 "iscsi_get_stats", 00:05:46.144 "iscsi_get_connections", 00:05:46.144 "iscsi_portal_group_set_auth", 00:05:46.144 "iscsi_start_portal_group", 00:05:46.144 "iscsi_delete_portal_group", 00:05:46.144 "iscsi_create_portal_group", 00:05:46.144 "iscsi_get_portal_groups", 00:05:46.144 "iscsi_delete_target_node", 00:05:46.144 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.144 "iscsi_target_node_add_pg_ig_maps", 00:05:46.144 "iscsi_create_target_node", 00:05:46.144 "iscsi_get_target_nodes", 00:05:46.144 "iscsi_delete_initiator_group", 00:05:46.144 "iscsi_initiator_group_remove_initiators", 00:05:46.144 "iscsi_initiator_group_add_initiators", 00:05:46.144 "iscsi_create_initiator_group", 00:05:46.144 "iscsi_get_initiator_groups", 00:05:46.144 "nvmf_set_crdt", 00:05:46.144 "nvmf_set_config", 00:05:46.144 "nvmf_set_max_subsystems", 00:05:46.144 "nvmf_stop_mdns_prr", 00:05:46.144 "nvmf_publish_mdns_prr", 00:05:46.144 "nvmf_subsystem_get_listeners", 00:05:46.144 "nvmf_subsystem_get_qpairs", 00:05:46.144 "nvmf_subsystem_get_controllers", 00:05:46.144 "nvmf_get_stats", 00:05:46.144 "nvmf_get_transports", 00:05:46.144 "nvmf_create_transport", 00:05:46.144 "nvmf_get_targets", 00:05:46.144 
"nvmf_delete_target", 00:05:46.144 "nvmf_create_target", 00:05:46.144 "nvmf_subsystem_allow_any_host", 00:05:46.144 "nvmf_subsystem_set_keys", 00:05:46.144 "nvmf_subsystem_remove_host", 00:05:46.144 "nvmf_subsystem_add_host", 00:05:46.144 "nvmf_ns_remove_host", 00:05:46.144 "nvmf_ns_add_host", 00:05:46.144 "nvmf_subsystem_remove_ns", 00:05:46.144 "nvmf_subsystem_set_ns_ana_group", 00:05:46.144 "nvmf_subsystem_add_ns", 00:05:46.144 "nvmf_subsystem_listener_set_ana_state", 00:05:46.144 "nvmf_discovery_get_referrals", 00:05:46.144 "nvmf_discovery_remove_referral", 00:05:46.144 "nvmf_discovery_add_referral", 00:05:46.144 "nvmf_subsystem_remove_listener", 00:05:46.144 "nvmf_subsystem_add_listener", 00:05:46.144 "nvmf_delete_subsystem", 00:05:46.144 "nvmf_create_subsystem", 00:05:46.144 "nvmf_get_subsystems", 00:05:46.144 "env_dpdk_get_mem_stats", 00:05:46.144 "nbd_get_disks", 00:05:46.144 "nbd_stop_disk", 00:05:46.144 "nbd_start_disk", 00:05:46.144 "ublk_recover_disk", 00:05:46.144 "ublk_get_disks", 00:05:46.144 "ublk_stop_disk", 00:05:46.144 "ublk_start_disk", 00:05:46.144 "ublk_destroy_target", 00:05:46.144 "ublk_create_target", 00:05:46.144 "virtio_blk_create_transport", 00:05:46.144 "virtio_blk_get_transports", 00:05:46.145 "vhost_controller_set_coalescing", 00:05:46.145 "vhost_get_controllers", 00:05:46.145 "vhost_delete_controller", 00:05:46.145 "vhost_create_blk_controller", 00:05:46.145 "vhost_scsi_controller_remove_target", 00:05:46.145 "vhost_scsi_controller_add_target", 00:05:46.145 "vhost_start_scsi_controller", 00:05:46.145 "vhost_create_scsi_controller", 00:05:46.145 "thread_set_cpumask", 00:05:46.145 "scheduler_set_options", 00:05:46.145 "framework_get_governor", 00:05:46.145 "framework_get_scheduler", 00:05:46.145 "framework_set_scheduler", 00:05:46.145 "framework_get_reactors", 00:05:46.145 "thread_get_io_channels", 00:05:46.145 "thread_get_pollers", 00:05:46.145 "thread_get_stats", 00:05:46.145 "framework_monitor_context_switch", 00:05:46.145 "spdk_kill_instance", 00:05:46.145 "log_enable_timestamps", 00:05:46.145 "log_get_flags", 00:05:46.145 "log_clear_flag", 00:05:46.145 "log_set_flag", 00:05:46.145 "log_get_level", 00:05:46.145 "log_set_level", 00:05:46.145 "log_get_print_level", 00:05:46.145 "log_set_print_level", 00:05:46.145 "framework_enable_cpumask_locks", 00:05:46.145 "framework_disable_cpumask_locks", 00:05:46.145 "framework_wait_init", 00:05:46.145 "framework_start_init", 00:05:46.145 "scsi_get_devices", 00:05:46.145 "bdev_get_histogram", 00:05:46.145 "bdev_enable_histogram", 00:05:46.145 "bdev_set_qos_limit", 00:05:46.145 "bdev_set_qd_sampling_period", 00:05:46.145 "bdev_get_bdevs", 00:05:46.145 "bdev_reset_iostat", 00:05:46.145 "bdev_get_iostat", 00:05:46.145 "bdev_examine", 00:05:46.145 "bdev_wait_for_examine", 00:05:46.145 "bdev_set_options", 00:05:46.145 "accel_get_stats", 00:05:46.145 "accel_set_options", 00:05:46.145 "accel_set_driver", 00:05:46.145 "accel_crypto_key_destroy", 00:05:46.145 "accel_crypto_keys_get", 00:05:46.145 "accel_crypto_key_create", 00:05:46.145 "accel_assign_opc", 00:05:46.145 "accel_get_module_info", 00:05:46.145 "accel_get_opc_assignments", 00:05:46.145 "vmd_rescan", 00:05:46.145 "vmd_remove_device", 00:05:46.145 "vmd_enable", 00:05:46.145 "sock_get_default_impl", 00:05:46.145 "sock_set_default_impl", 00:05:46.145 "sock_impl_set_options", 00:05:46.145 "sock_impl_get_options", 00:05:46.145 "iobuf_get_stats", 00:05:46.145 "iobuf_set_options", 00:05:46.145 "keyring_get_keys", 00:05:46.145 "framework_get_pci_devices", 00:05:46.145 
"framework_get_config", 00:05:46.145 "framework_get_subsystems", 00:05:46.145 "fsdev_set_opts", 00:05:46.145 "fsdev_get_opts", 00:05:46.145 "trace_get_info", 00:05:46.145 "trace_get_tpoint_group_mask", 00:05:46.145 "trace_disable_tpoint_group", 00:05:46.145 "trace_enable_tpoint_group", 00:05:46.145 "trace_clear_tpoint_mask", 00:05:46.145 "trace_set_tpoint_mask", 00:05:46.145 "notify_get_notifications", 00:05:46.145 "notify_get_types", 00:05:46.145 "spdk_get_version", 00:05:46.145 "rpc_get_methods" 00:05:46.145 ] 00:05:46.145 18:00:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.145 18:00:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:46.145 18:00:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71702 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71702 ']' 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71702 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71702 00:05:46.145 killing process with pid 71702 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71702' 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71702 00:05:46.145 18:00:20 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71702 00:05:46.403 00:05:46.403 real 0m1.559s 00:05:46.403 user 0m2.831s 00:05:46.403 sys 0m0.356s 00:05:46.403 18:00:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.403 ************************************ 00:05:46.403 END TEST spdkcli_tcp 00:05:46.403 18:00:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.403 ************************************ 00:05:46.403 18:00:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.403 18:00:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.403 18:00:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.403 18:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:46.403 ************************************ 00:05:46.403 START TEST dpdk_mem_utility 00:05:46.403 ************************************ 00:05:46.403 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.403 * Looking for test storage... 
00:05:46.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:46.403 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.661 18:00:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.661 --rc genhtml_branch_coverage=1 00:05:46.661 --rc genhtml_function_coverage=1 00:05:46.661 --rc genhtml_legend=1 00:05:46.661 --rc geninfo_all_blocks=1 00:05:46.661 --rc geninfo_unexecuted_blocks=1 00:05:46.661 00:05:46.661 ' 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.661 --rc 
genhtml_branch_coverage=1 00:05:46.661 --rc genhtml_function_coverage=1 00:05:46.661 --rc genhtml_legend=1 00:05:46.661 --rc geninfo_all_blocks=1 00:05:46.661 --rc geninfo_unexecuted_blocks=1 00:05:46.661 00:05:46.661 ' 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.661 --rc genhtml_branch_coverage=1 00:05:46.661 --rc genhtml_function_coverage=1 00:05:46.661 --rc genhtml_legend=1 00:05:46.661 --rc geninfo_all_blocks=1 00:05:46.661 --rc geninfo_unexecuted_blocks=1 00:05:46.661 00:05:46.661 ' 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.661 --rc genhtml_branch_coverage=1 00:05:46.661 --rc genhtml_function_coverage=1 00:05:46.661 --rc genhtml_legend=1 00:05:46.661 --rc geninfo_all_blocks=1 00:05:46.661 --rc geninfo_unexecuted_blocks=1 00:05:46.661 00:05:46.661 ' 00:05:46.661 18:00:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.661 18:00:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71796 00:05:46.661 18:00:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71796 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71796 ']' 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.661 18:00:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.661 18:00:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.661 [2024-12-13 18:00:20.928469] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
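The memory report that follows is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK heap state to /tmp/spdk_mem_dump.txt (the filename in the RPC's JSON reply below), and dpdk_mem_info.py then summarizes that file, first as a whole and then per heap with -m 0. The same sequence outside the harness, as a sketch assembled from the trace:

# Ask the running spdk_tgt to dump its DPDK memory state to a file.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# Summarize heaps and mempools, then drill into heap id 0, as the test does.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0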
00:05:46.661 [2024-12-13 18:00:20.929062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71796 ] 00:05:46.919 [2024-12-13 18:00:21.070110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.919 [2024-12-13 18:00:21.089401] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.484 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.484 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:47.484 18:00:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.484 18:00:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.484 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.484 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.484 { 00:05:47.484 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.484 } 00:05:47.484 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.484 18:00:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:47.484 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:47.484 1 heaps totaling size 818.000000 MiB 00:05:47.484 size: 818.000000 MiB heap id: 0 00:05:47.484 end heaps---------- 00:05:47.484 9 mempools totaling size 603.782043 MiB 00:05:47.484 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.484 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.484 size: 100.555481 MiB name: bdev_io_71796 00:05:47.484 size: 50.003479 MiB name: msgpool_71796 00:05:47.484 size: 36.509338 MiB name: fsdev_io_71796 00:05:47.484 size: 21.763794 MiB name: PDU_Pool 00:05:47.484 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.484 size: 4.133484 MiB name: evtpool_71796 00:05:47.484 size: 0.026123 MiB name: Session_Pool 00:05:47.484 end mempools------- 00:05:47.484 6 memzones totaling size 4.142822 MiB 00:05:47.484 size: 1.000366 MiB name: RG_ring_0_71796 00:05:47.484 size: 1.000366 MiB name: RG_ring_1_71796 00:05:47.484 size: 1.000366 MiB name: RG_ring_4_71796 00:05:47.484 size: 1.000366 MiB name: RG_ring_5_71796 00:05:47.484 size: 0.125366 MiB name: RG_ring_2_71796 00:05:47.485 size: 0.015991 MiB name: RG_ring_3_71796 00:05:47.485 end memzones------- 00:05:47.485 18:00:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.744 heap id: 0 total size: 818.000000 MiB number of busy elements: 316 number of free elements: 15 00:05:47.744 list of free elements. 
size: 10.802673 MiB 00:05:47.744 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:47.744 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:47.744 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:47.744 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:47.744 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:47.744 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:47.744 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:47.744 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:47.744 element at address: 0x20001ae00000 with size: 0.567871 MiB 00:05:47.744 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:47.744 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:47.744 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:47.744 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:47.744 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:47.744 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:47.744 list of standard malloc elements. size: 199.268433 MiB 00:05:47.744 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:47.744 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:47.744 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:47.744 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:47.744 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:47.744 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:47.744 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:47.744 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:47.744 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:47.744 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:47.744 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:47.744 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:47.745 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92d40 with size: 0.000183 MiB 
00:05:47.745 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:47.745 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:47.746 element at 
address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:47.746 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:47.746 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e400 
with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:47.746 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:47.746 list of memzone associated elements. 
size: 607.928894 MiB 00:05:47.746 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:47.746 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.746 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:47.746 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.746 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:47.746 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71796_0 00:05:47.746 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:47.746 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71796_0 00:05:47.746 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:47.746 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71796_0 00:05:47.746 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:47.746 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.746 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:47.746 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.746 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:47.746 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71796_0 00:05:47.746 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:47.746 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71796 00:05:47.746 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.746 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71796 00:05:47.746 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:47.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.746 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:47.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.746 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:47.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.746 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:47.746 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.746 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:47.747 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71796 00:05:47.747 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:47.747 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71796 00:05:47.747 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:47.747 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71796 00:05:47.747 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:47.747 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71796 00:05:47.747 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:47.747 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71796 00:05:47.747 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:47.747 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71796 00:05:47.747 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:47.747 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.747 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:47.747 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.747 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:47.747 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.747 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:47.747 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71796 00:05:47.747 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:47.747 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71796 00:05:47.747 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:47.747 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.747 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:47.747 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.747 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:47.747 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71796 00:05:47.747 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:47.747 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.747 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:47.747 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71796 00:05:47.747 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:47.747 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71796 00:05:47.747 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:47.747 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71796 00:05:47.747 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:47.747 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.747 18:00:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.747 18:00:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71796 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71796 ']' 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71796 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71796 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71796' 00:05:47.747 killing process with pid 71796 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71796 00:05:47.747 18:00:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71796 00:05:48.006 00:05:48.006 real 0m1.447s 00:05:48.006 user 0m1.521s 00:05:48.006 sys 0m0.350s 00:05:48.006 18:00:22 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.006 18:00:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.006 ************************************ 00:05:48.006 END TEST dpdk_mem_utility 00:05:48.006 ************************************ 00:05:48.006 18:00:22 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.006 18:00:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.006 18:00:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.006 18:00:22 -- common/autotest_common.sh@10 -- # set +x 
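Note before the event suite begins: stripped of the autotest xtrace, the dpdk_mem_utility flow that just completed is only a few commands. A minimal sketch, assuming the checkout layout of this run; the spdk_tgt binary path (build/bin) is the usual build output and is an assumption, since the log shows only the helper scripts it calls:

# Hand-run equivalent of test_dpdk_mem_info.sh (paths as in this run).
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/bin/spdk_tgt -m 0x1 &             # target app on core 0, as logged
spdkpid=$!
sleep 1                                       # crude stand-in for the script's waitforlisten
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt (the JSON reply above)
$SPDK/scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
$SPDK/scripts/dpdk_mem_info.py -m 0           # per-element dump for heap id 0, as printed above
kill $spdkpid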
00:05:48.006 ************************************ 00:05:48.006 START TEST event 00:05:48.006 ************************************ 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.006 * Looking for test storage... 00:05:48.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.006 18:00:22 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.006 18:00:22 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.006 18:00:22 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.006 18:00:22 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.006 18:00:22 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.006 18:00:22 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.006 18:00:22 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.006 18:00:22 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.006 18:00:22 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.006 18:00:22 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.006 18:00:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.006 18:00:22 event -- scripts/common.sh@344 -- # case "$op" in 00:05:48.006 18:00:22 event -- scripts/common.sh@345 -- # : 1 00:05:48.006 18:00:22 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.006 18:00:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.006 18:00:22 event -- scripts/common.sh@365 -- # decimal 1 00:05:48.006 18:00:22 event -- scripts/common.sh@353 -- # local d=1 00:05:48.006 18:00:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.006 18:00:22 event -- scripts/common.sh@355 -- # echo 1 00:05:48.006 18:00:22 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.006 18:00:22 event -- scripts/common.sh@366 -- # decimal 2 00:05:48.006 18:00:22 event -- scripts/common.sh@353 -- # local d=2 00:05:48.006 18:00:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.006 18:00:22 event -- scripts/common.sh@355 -- # echo 2 00:05:48.006 18:00:22 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.006 18:00:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.006 18:00:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.006 18:00:22 event -- scripts/common.sh@368 -- # return 0 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.006 --rc genhtml_branch_coverage=1 00:05:48.006 --rc genhtml_function_coverage=1 00:05:48.006 --rc genhtml_legend=1 00:05:48.006 --rc geninfo_all_blocks=1 00:05:48.006 --rc geninfo_unexecuted_blocks=1 00:05:48.006 00:05:48.006 ' 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.006 --rc genhtml_branch_coverage=1 00:05:48.006 --rc genhtml_function_coverage=1 00:05:48.006 --rc genhtml_legend=1 00:05:48.006 --rc 
geninfo_all_blocks=1 00:05:48.006 --rc geninfo_unexecuted_blocks=1 00:05:48.006 00:05:48.006 ' 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.006 --rc genhtml_branch_coverage=1 00:05:48.006 --rc genhtml_function_coverage=1 00:05:48.006 --rc genhtml_legend=1 00:05:48.006 --rc geninfo_all_blocks=1 00:05:48.006 --rc geninfo_unexecuted_blocks=1 00:05:48.006 00:05:48.006 ' 00:05:48.006 18:00:22 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.006 --rc genhtml_branch_coverage=1 00:05:48.006 --rc genhtml_function_coverage=1 00:05:48.006 --rc genhtml_legend=1 00:05:48.006 --rc geninfo_all_blocks=1 00:05:48.006 --rc geninfo_unexecuted_blocks=1 00:05:48.006 00:05:48.007 ' 00:05:48.007 18:00:22 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.007 18:00:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.007 18:00:22 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.007 18:00:22 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:48.007 18:00:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.007 18:00:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.007 ************************************ 00:05:48.007 START TEST event_perf 00:05:48.007 ************************************ 00:05:48.007 18:00:22 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.007 Running I/O for 1 seconds...[2024-12-13 18:00:22.365720] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:48.007 [2024-12-13 18:00:22.366062] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71877 ] 00:05:48.268 [2024-12-13 18:00:22.510536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.268 [2024-12-13 18:00:22.531708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.268 [2024-12-13 18:00:22.531990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.268 [2024-12-13 18:00:22.532051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.268 Running I/O for 1 seconds...[2024-12-13 18:00:22.532085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.209 00:05:49.209 lcore 0: 189490 00:05:49.209 lcore 1: 189487 00:05:49.209 lcore 2: 189487 00:05:49.209 lcore 3: 189488 00:05:49.209 done. 
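event_perf and the two reactor tests run next are standalone binaries, so the same measurements can be reproduced outside autotest with the flags visible in this log (-m core mask, -t run time in seconds):

# Direct invocations of the three event-framework benchmarks in this suite.
T=/home/vagrant/spdk_repo/spdk/test/event
$T/event_perf/event_perf -m 0xF -t 1     # per-lcore event counts (the lcore lines above)
$T/reactor/reactor -t 1                  # oneshot/tick scheduling trace
$T/reactor_perf/reactor_perf -t 1        # aggregate events-per-second figure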
00:05:49.209 00:05:49.209 real 0m1.228s 00:05:49.209 user 0m4.057s 00:05:49.209 sys 0m0.055s 00:05:49.209 ************************************ 00:05:49.209 END TEST event_perf 00:05:49.209 ************************************ 00:05:49.209 18:00:23 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.209 18:00:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.468 18:00:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:49.468 18:00:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:49.468 18:00:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.468 18:00:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.468 ************************************ 00:05:49.468 START TEST event_reactor 00:05:49.468 ************************************ 00:05:49.468 18:00:23 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:49.468 [2024-12-13 18:00:23.635091] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:49.468 [2024-12-13 18:00:23.635199] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71916 ] 00:05:49.468 [2024-12-13 18:00:23.779629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.468 [2024-12-13 18:00:23.797390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.863 test_start 00:05:50.863 oneshot 00:05:50.863 tick 100 00:05:50.863 tick 100 00:05:50.863 tick 250 00:05:50.863 tick 100 00:05:50.863 tick 100 00:05:50.863 tick 250 00:05:50.863 tick 100 00:05:50.863 tick 500 00:05:50.863 tick 100 00:05:50.863 tick 100 00:05:50.863 tick 250 00:05:50.863 tick 100 00:05:50.863 tick 100 00:05:50.863 test_end 00:05:50.863 00:05:50.863 real 0m1.229s 00:05:50.863 user 0m1.075s 00:05:50.863 sys 0m0.047s 00:05:50.863 ************************************ 00:05:50.863 END TEST event_reactor 00:05:50.863 ************************************ 00:05:50.863 18:00:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.863 18:00:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:50.863 18:00:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.863 18:00:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:50.863 18:00:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.863 18:00:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.863 ************************************ 00:05:50.864 START TEST event_reactor_perf 00:05:50.864 ************************************ 00:05:50.864 18:00:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.864 [2024-12-13 18:00:24.909186] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:50.864 [2024-12-13 18:00:24.909492] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71947 ] 00:05:50.864 [2024-12-13 18:00:25.049155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.864 [2024-12-13 18:00:25.071445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.805 test_start 00:05:51.805 test_end 00:05:51.805 Performance: 313137 events per second 00:05:51.805 00:05:51.805 real 0m1.223s 00:05:51.805 user 0m1.065s 00:05:51.805 sys 0m0.052s 00:05:51.805 ************************************ 00:05:51.805 END TEST event_reactor_perf 00:05:51.805 ************************************ 00:05:51.805 18:00:26 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.805 18:00:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.805 18:00:26 event -- event/event.sh@49 -- # uname -s 00:05:51.805 18:00:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:51.805 18:00:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:51.805 18:00:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.805 18:00:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.805 18:00:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.805 ************************************ 00:05:51.805 START TEST event_scheduler 00:05:51.805 ************************************ 00:05:51.805 18:00:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:52.067 * Looking for test storage... 
00:05:52.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:52.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.067 18:00:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:52.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.067 --rc genhtml_branch_coverage=1 00:05:52.067 --rc genhtml_function_coverage=1 00:05:52.067 --rc genhtml_legend=1 00:05:52.067 --rc geninfo_all_blocks=1 00:05:52.067 --rc geninfo_unexecuted_blocks=1 00:05:52.067 00:05:52.067 ' 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:52.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.067 --rc genhtml_branch_coverage=1 00:05:52.067 --rc genhtml_function_coverage=1 00:05:52.067 --rc genhtml_legend=1 00:05:52.067 --rc geninfo_all_blocks=1 00:05:52.067 --rc geninfo_unexecuted_blocks=1 00:05:52.067 00:05:52.067 ' 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:52.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.067 --rc genhtml_branch_coverage=1 00:05:52.067 --rc genhtml_function_coverage=1 00:05:52.067 --rc genhtml_legend=1 00:05:52.067 --rc geninfo_all_blocks=1 00:05:52.067 --rc geninfo_unexecuted_blocks=1 00:05:52.067 00:05:52.067 ' 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:52.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.067 --rc genhtml_branch_coverage=1 00:05:52.067 --rc genhtml_function_coverage=1 00:05:52.067 --rc genhtml_legend=1 00:05:52.067 --rc geninfo_all_blocks=1 00:05:52.067 --rc geninfo_unexecuted_blocks=1 00:05:52.067 00:05:52.067 ' 00:05:52.067 18:00:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:52.067 18:00:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72018 00:05:52.067 18:00:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.067 18:00:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72018 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72018 ']' 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.067 18:00:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.067 18:00:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:52.067 [2024-12-13 18:00:26.371171] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:52.067 [2024-12-13 18:00:26.371316] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72018 ] 00:05:52.327 [2024-12-13 18:00:26.517344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.327 [2024-12-13 18:00:26.545137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.327 [2024-12-13 18:00:26.545587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.327 [2024-12-13 18:00:26.545636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.327 [2024-12-13 18:00:26.545716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.895 18:00:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.895 18:00:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:52.895 18:00:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:52.895 18:00:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.895 18:00:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.895 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.895 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.895 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.895 POWER: Cannot set governor of lcore 0 to performance 00:05:52.895 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.895 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.895 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:52.895 POWER: Unable to set Power Management Environment for lcore 0 00:05:52.895 [2024-12-13 18:00:27.227467] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:52.895 [2024-12-13 18:00:27.227486] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:52.895 [2024-12-13 18:00:27.227504] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:52.895 [2024-12-13 18:00:27.227519] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.895 [2024-12-13 18:00:27.227527] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.895 [2024-12-13 18:00:27.227535] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.895 18:00:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.895 18:00:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.895 18:00:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.895 18:00:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 [2024-12-13 18:00:27.283082] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
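The POWER and GUEST_CHANNEL errors above only mean this VM exposes neither a writable cpufreq sysfs nor a virtio power channel; per the NOTICE lines, scheduler_dynamic falls back to its defaults (load limit 20, core limit 80, core busy 95) and the test proceeds. The RPC sequence being driven is short; a sketch over the app's default /var/tmp/spdk.sock socket, using only the commands visible in the trace:

# Start the scheduler test app paused, pick a scheduler, then finish init.
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &  # -p 0x2 -> main lcore 2 (see --main-lcore=2 above)
$SPDK/scripts/rpc.py framework_set_scheduler dynamic   # while init is held by --wait-for-rpc
$SPDK/scripts/rpc.py framework_start_init              # reactors start with the dynamic scheduler active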
00:05:53.154 18:00:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.154 18:00:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:53.154 18:00:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.154 18:00:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.154 18:00:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 ************************************ 00:05:53.154 START TEST scheduler_create_thread 00:05:53.154 ************************************ 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 2 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 3 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 4 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 5 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 6 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 7 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 8 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.154 9 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:53.154 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.155 10 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.155 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.740 ************************************ 00:05:53.740 END TEST scheduler_create_thread 00:05:53.740 ************************************ 00:05:53.740 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.740 00:05:53.740 real 0m0.588s 00:05:53.740 user 0m0.013s 00:05:53.740 sys 0m0.004s 00:05:53.740 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.740 18:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.740 18:00:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.740 18:00:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72018 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72018 ']' 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72018 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72018 00:05:53.740 killing process with pid 72018 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72018' 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72018 00:05:53.740 18:00:27 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72018 00:05:54.001 [2024-12-13 18:00:28.360835] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
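Condensed, the scheduler_create_thread subtest above is a straight run of plugin RPCs. A sketch with one representative call per thread shape; the id captures mirror the thread_id=11/12 assignments in the trace, and it assumes scheduler_plugin (shipped alongside the test app) is importable by rpc.py:

# Thread-lifecycle RPCs as exercised above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned, 100% busy (repeated for masks 0x2/0x4/0x8)
$rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # pinned, idle (likewise one per core)
$rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, 30% active
id=$($rpc scheduler_thread_create -n half_active -a 0)        # prints the new thread id (11 in this run)
$rpc scheduler_thread_set_active "$id" 50                     # raise it to 50% active
id=$($rpc scheduler_thread_create -n deleted -a 100)          # id 12 in this run
$rpc scheduler_thread_delete "$id"                            # and tear it down again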
00:05:54.263 ************************************ 00:05:54.263 END TEST event_scheduler 00:05:54.263 ************************************ 00:05:54.263 00:05:54.263 real 0m2.327s 00:05:54.263 user 0m4.597s 00:05:54.263 sys 0m0.327s 00:05:54.263 18:00:28 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.263 18:00:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.263 18:00:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.263 18:00:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.263 18:00:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.263 18:00:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.263 18:00:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.263 ************************************ 00:05:54.263 START TEST app_repeat 00:05:54.263 ************************************ 00:05:54.263 18:00:28 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.263 Process app_repeat pid: 72091 00:05:54.263 spdk_app_start Round 0 00:05:54.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72091 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72091' 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.263 18:00:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72091 /var/tmp/spdk-nbd.sock 00:05:54.263 18:00:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72091 ']' 00:05:54.263 18:00:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.263 18:00:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.263 18:00:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.263 18:00:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.263 18:00:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.263 [2024-12-13 18:00:28.575416] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
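The app_repeat harness starting here follows the usual pattern for these event tests: launch the test binary against a private RPC socket, remember its pid for the cleanup trap, and block until the socket accepts connections. A rough sketch of what event.sh does at this point (binary path, socket path, flags, and the trap line are verbatim from the trace; the polling loop is an assumed stand-in for waitforlisten's internals):

    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!                                    # 72091 in this run
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    # assumed equivalent of waitforlisten: poll until the UNIX socket shows up
    while ! [ -S /var/tmp/spdk-nbd.sock ]; do
        sleep 0.1
    done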
00:05:54.263 [2024-12-13 18:00:28.575528] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72091 ] 00:05:54.526 [2024-12-13 18:00:28.721406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.526 [2024-12-13 18:00:28.740501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.526 [2024-12-13 18:00:28.740506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.095 18:00:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.095 18:00:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.095 18:00:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.357 Malloc0 00:05:55.357 18:00:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.617 Malloc1 00:05:55.617 18:00:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.617 18:00:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.877 /dev/nbd0 00:05:55.877 18:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.877 18:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:55.877 18:00:30 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.877 1+0 records in 00:05:55.877 1+0 records out 00:05:55.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699643 s, 5.9 MB/s 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.877 18:00:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.877 18:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.877 18:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.877 18:00:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.138 /dev/nbd1 00:05:56.138 18:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.138 18:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.138 1+0 records in 00:05:56.138 1+0 records out 00:05:56.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211579 s, 19.4 MB/s 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.138 18:00:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.138 18:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.138 18:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.138 18:00:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.138 18:00:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.138 
18:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.399 { 00:05:56.399 "nbd_device": "/dev/nbd0", 00:05:56.399 "bdev_name": "Malloc0" 00:05:56.399 }, 00:05:56.399 { 00:05:56.399 "nbd_device": "/dev/nbd1", 00:05:56.399 "bdev_name": "Malloc1" 00:05:56.399 } 00:05:56.399 ]' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.399 { 00:05:56.399 "nbd_device": "/dev/nbd0", 00:05:56.399 "bdev_name": "Malloc0" 00:05:56.399 }, 00:05:56.399 { 00:05:56.399 "nbd_device": "/dev/nbd1", 00:05:56.399 "bdev_name": "Malloc1" 00:05:56.399 } 00:05:56.399 ]' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.399 /dev/nbd1' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.399 /dev/nbd1' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.399 256+0 records in 00:05:56.399 256+0 records out 00:05:56.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665496 s, 158 MB/s 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.399 256+0 records in 00:05:56.399 256+0 records out 00:05:56.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165006 s, 63.5 MB/s 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.399 256+0 records in 00:05:56.399 256+0 records out 00:05:56.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019275 s, 54.4 MB/s 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.399 18:00:30 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.399 18:00:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.400 18:00:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.400 18:00:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.400 18:00:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.400 18:00:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.400 18:00:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.661 18:00:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.923 18:00:31 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.923 18:00:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.184 18:00:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.184 18:00:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.184 18:00:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.443 [2024-12-13 18:00:31.638331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.443 [2024-12-13 18:00:31.655944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.443 [2024-12-13 18:00:31.656046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.443 [2024-12-13 18:00:31.686769] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.443 [2024-12-13 18:00:31.686826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.739 spdk_app_start Round 1 00:06:00.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.739 18:00:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.739 18:00:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.739 18:00:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72091 /var/tmp/spdk-nbd.sock 00:06:00.739 18:00:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72091 ']' 00:06:00.739 18:00:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.739 18:00:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.739 18:00:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
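Round 1 rebuilds the same fixture the first round used: two malloc bdevs (64 MB with 4096-byte blocks, per bdev_malloc_create's size and block-size arguments) exported over NBD. Sketched from the rpc.py calls in the trace, with the repeated socket argument factored into a variable for readability:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096       # prints the new bdev name, Malloc0
    $rpc bdev_malloc_create 64 4096       # prints Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1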
00:06:00.739 18:00:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.739 18:00:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.739 18:00:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.739 18:00:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.739 18:00:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.739 Malloc0 00:06:00.739 18:00:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.998 Malloc1 00:06:00.998 18:00:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.998 18:00:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.259 /dev/nbd0 00:06:01.259 18:00:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.259 18:00:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.259 1+0 records in 00:06:01.259 1+0 records out 
00:06:01.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205051 s, 20.0 MB/s 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.259 18:00:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.259 18:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.259 18:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.259 18:00:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.259 /dev/nbd1 00:06:01.520 18:00:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.520 18:00:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.520 1+0 records in 00:06:01.520 1+0 records out 00:06:01.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178496 s, 22.9 MB/s 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.520 18:00:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.521 18:00:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.521 { 00:06:01.521 "nbd_device": "/dev/nbd0", 00:06:01.521 "bdev_name": "Malloc0" 00:06:01.521 }, 00:06:01.521 { 00:06:01.521 "nbd_device": "/dev/nbd1", 00:06:01.521 "bdev_name": "Malloc1" 00:06:01.521 } 
00:06:01.521 ]' 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.521 { 00:06:01.521 "nbd_device": "/dev/nbd0", 00:06:01.521 "bdev_name": "Malloc0" 00:06:01.521 }, 00:06:01.521 { 00:06:01.521 "nbd_device": "/dev/nbd1", 00:06:01.521 "bdev_name": "Malloc1" 00:06:01.521 } 00:06:01.521 ]' 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.521 /dev/nbd1' 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.521 /dev/nbd1' 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.521 18:00:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.782 256+0 records in 00:06:01.782 256+0 records out 00:06:01.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118014 s, 88.9 MB/s 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.782 256+0 records in 00:06:01.782 256+0 records out 00:06:01.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179694 s, 58.4 MB/s 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.782 256+0 records in 00:06:01.782 256+0 records out 00:06:01.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145532 s, 72.1 MB/s 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.782 18:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.783 18:00:35 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.783 18:00:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.044 18:00:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.045 18:00:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.045 18:00:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.307 18:00:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.307 18:00:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.569 18:00:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.569 [2024-12-13 18:00:36.909585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.569 [2024-12-13 18:00:36.925101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.569 [2024-12-13 18:00:36.925104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.830 [2024-12-13 18:00:36.954058] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.830 [2024-12-13 18:00:36.954093] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.121 spdk_app_start Round 2 00:06:06.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.121 18:00:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.121 18:00:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:06.121 18:00:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72091 /var/tmp/spdk-nbd.sock 00:06:06.121 18:00:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72091 ']' 00:06:06.121 18:00:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.121 18:00:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.121 18:00:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
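The nbd_dd_data_verify passes that Round 2 is about to repeat reduce to one write sweep and one verify sweep per device: fill a 1 MiB scratch file from /dev/urandom, dd it onto each NBD device with O_DIRECT, then cmp the device contents back against the file. All commands below are taken from the trace (bs=4096 count=256 is exactly the 1M window that cmp checks):

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                              # verify pass; non-zero exit on mismatch
    done
    rm "$tmp"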
00:06:06.121 18:00:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.121 18:00:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.121 18:00:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.121 18:00:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.121 18:00:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.121 Malloc0 00:06:06.121 18:00:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.121 Malloc1 00:06:06.121 18:00:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.122 18:00:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.381 /dev/nbd0 00:06:06.381 18:00:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.381 18:00:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.381 1+0 records in 00:06:06.381 1+0 records out 
00:06:06.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225909 s, 18.1 MB/s 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.381 18:00:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.381 18:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.381 18:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.381 18:00:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.640 /dev/nbd1 00:06:06.640 18:00:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.640 18:00:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.640 1+0 records in 00:06:06.640 1+0 records out 00:06:06.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223521 s, 18.3 MB/s 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.640 18:00:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.640 18:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.640 18:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.640 18:00:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.640 18:00:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.640 18:00:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.915 { 00:06:06.915 "nbd_device": "/dev/nbd0", 00:06:06.915 "bdev_name": "Malloc0" 00:06:06.915 }, 00:06:06.915 { 00:06:06.915 "nbd_device": "/dev/nbd1", 00:06:06.915 "bdev_name": "Malloc1" 00:06:06.915 } 
00:06:06.915 ]' 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.915 { 00:06:06.915 "nbd_device": "/dev/nbd0", 00:06:06.915 "bdev_name": "Malloc0" 00:06:06.915 }, 00:06:06.915 { 00:06:06.915 "nbd_device": "/dev/nbd1", 00:06:06.915 "bdev_name": "Malloc1" 00:06:06.915 } 00:06:06.915 ]' 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.915 /dev/nbd1' 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.915 /dev/nbd1' 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.915 256+0 records in 00:06:06.915 256+0 records out 00:06:06.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120014 s, 87.4 MB/s 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.915 256+0 records in 00:06:06.915 256+0 records out 00:06:06.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174584 s, 60.1 MB/s 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.915 256+0 records in 00:06:06.915 256+0 records out 00:06:06.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182957 s, 57.3 MB/s 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.915 18:00:41 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.915 18:00:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.174 18:00:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.433 18:00:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.692 18:00:41 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.692 18:00:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.692 18:00:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.950 18:00:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.950 [2024-12-13 18:00:42.168322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.950 [2024-12-13 18:00:42.184003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.950 [2024-12-13 18:00:42.184006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.950 [2024-12-13 18:00:42.212639] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.950 [2024-12-13 18:00:42.212680] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.237 18:00:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72091 /var/tmp/spdk-nbd.sock 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72091 ']' 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
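The nbd_get_count check traced just above is how each round proves teardown worked: once both disks are stopped, nbd_get_disks returns an empty JSON array, jq extracts an empty device list from it, and grep -c counts zero matches. A sketch of that check, wrapped in a hypothetical helper nbd_count_is_zero for readability (the bare "true" in the trace corresponds to the || true guard here, since grep -c exits 1 when it prints 0):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_count_is_zero() {
        local json names count
        json=$($rpc nbd_get_disks)                  # '[]' after both nbd_stop_disk calls
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)
        [ "$count" -eq 0 ]                          # this run got 0, as expected
    }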
00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.237 18:00:45 event.app_repeat -- event/event.sh@39 -- # killprocess 72091 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72091 ']' 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72091 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72091 00:06:11.237 killing process with pid 72091 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72091' 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72091 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72091 00:06:11.237 spdk_app_start is called in Round 0. 00:06:11.237 Shutdown signal received, stop current app iteration 00:06:11.237 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:11.237 spdk_app_start is called in Round 1. 00:06:11.237 Shutdown signal received, stop current app iteration 00:06:11.237 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:11.237 spdk_app_start is called in Round 2. 00:06:11.237 Shutdown signal received, stop current app iteration 00:06:11.237 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:11.237 spdk_app_start is called in Round 3. 00:06:11.237 Shutdown signal received, stop current app iteration 00:06:11.237 18:00:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:11.237 18:00:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:11.237 00:06:11.237 real 0m16.896s 00:06:11.237 user 0m37.973s 00:06:11.237 sys 0m1.976s 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.237 18:00:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.237 ************************************ 00:06:11.237 END TEST app_repeat 00:06:11.237 ************************************ 00:06:11.237 18:00:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:11.237 18:00:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:11.237 18:00:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.237 18:00:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.237 18:00:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.237 ************************************ 00:06:11.237 START TEST cpu_locks 00:06:11.237 ************************************ 00:06:11.237 18:00:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:11.237 * Looking for test storage... 
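The cpu_locks preamble that follows gates coverage flags on the installed lcov version: lt 1.15 2 splits both version strings on IFS=.-: and compares them field by field. A compact sketch of that comparison (the lcov/awk pipeline and the IFS split are as traced; the loop body abbreviates cmp_versions' control flow, and the final guard is an assumption about how the result gets consumed):

    ver=$(lcov --version | awk '{print $NF}')       # last whitespace-separated field, e.g. 1.15
    IFS=.-: read -ra ver1 <<< "$ver"                # e.g. (1 15)
    IFS=.-: read -ra ver2 <<< "2"
    lt=0
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { lt=1; break; }
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && break  # greater in this field: not less than
    done
    ((lt)) && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'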
00:06:11.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:11.237 18:00:45 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.237 18:00:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.237 18:00:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.237 18:00:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:11.237 18:00:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.238 18:00:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:11.238 18:00:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.238 18:00:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.238 18:00:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.238 18:00:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:11.238 18:00:45 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.238 18:00:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.238 --rc genhtml_branch_coverage=1 00:06:11.238 --rc genhtml_function_coverage=1 00:06:11.238 --rc genhtml_legend=1 00:06:11.238 --rc geninfo_all_blocks=1 00:06:11.238 --rc geninfo_unexecuted_blocks=1 00:06:11.238 00:06:11.238 ' 00:06:11.238 18:00:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.238 --rc genhtml_branch_coverage=1 00:06:11.238 --rc genhtml_function_coverage=1 
00:06:11.238 --rc genhtml_legend=1 00:06:11.238 --rc geninfo_all_blocks=1 00:06:11.238 --rc geninfo_unexecuted_blocks=1 00:06:11.238 00:06:11.238 ' 00:06:11.238 18:00:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.238 --rc genhtml_branch_coverage=1 00:06:11.238 --rc genhtml_function_coverage=1 00:06:11.238 --rc genhtml_legend=1 00:06:11.238 --rc geninfo_all_blocks=1 00:06:11.238 --rc geninfo_unexecuted_blocks=1 00:06:11.238 00:06:11.238 ' 00:06:11.238 18:00:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.238 --rc genhtml_branch_coverage=1 00:06:11.238 --rc genhtml_function_coverage=1 00:06:11.238 --rc genhtml_legend=1 00:06:11.238 --rc geninfo_all_blocks=1 00:06:11.238 --rc geninfo_unexecuted_blocks=1 00:06:11.238 00:06:11.238 ' 00:06:11.238 18:00:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:11.238 18:00:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:11.238 18:00:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:11.238 18:00:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:11.238 18:00:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.238 18:00:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.238 18:00:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.496 ************************************ 00:06:11.496 START TEST default_locks 00:06:11.496 ************************************ 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72516 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72516 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72516 ']' 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.496 18:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.496 [2024-12-13 18:00:45.691516] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
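The xtrace above walks the lt/cmp_versions pair in scripts/common.sh: each dotted version is split on '.' into an array and the fields are compared numerically, left to right, with missing fields treated as 0 (so 1.15 sorts before 2). A minimal standalone sketch of the same comparison, under a hypothetical name version_lt, not the repo's implementation:

    # return 0 (true) when version $1 sorts strictly before version $2
    version_lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earliest differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'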
00:06:11.496 [2024-12-13 18:00:45.691642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72516 ] 00:06:11.496 [2024-12-13 18:00:45.832552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.496 [2024-12-13 18:00:45.849957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72516 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72516 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72516 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72516 ']' 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72516 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72516 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.432 killing process with pid 72516 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72516' 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72516 00:06:12.432 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72516 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72516 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72516 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72516 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72516 ']' 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.693 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.693 ERROR: process (pid: 72516) is no longer running 00:06:12.693 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72516) - No such process 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.693 00:06:12.693 real 0m1.335s 00:06:12.693 user 0m1.409s 00:06:12.693 sys 0m0.358s 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.693 18:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.693 ************************************ 00:06:12.693 END TEST default_locks 00:06:12.693 ************************************ 00:06:12.693 18:00:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:12.693 18:00:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.693 18:00:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.693 18:00:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.693 ************************************ 00:06:12.693 START TEST default_locks_via_rpc 00:06:12.693 ************************************ 00:06:12.693 18:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:12.693 18:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72558 00:06:12.693 18:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72558 00:06:12.694 18:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72558 ']' 00:06:12.694 18:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.694 18:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
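What the default_locks run above boils down to: spdk_tgt takes a file lock per claimed core, and locks_exist simply greps lslocks output for it. A hedged sketch of the same probe (the pid value is only a placeholder taken from this run):

    pid=72516   # placeholder: the spdk_tgt pid from the trace above
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds its CPU core lock"
    fi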
00:06:12.694 18:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.694 18:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.694 18:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.694 18:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.954 [2024-12-13 18:00:47.088407] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:12.954 [2024-12-13 18:00:47.088591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72558 ] 00:06:12.954 [2024-12-13 18:00:47.231181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.954 [2024-12-13 18:00:47.248147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72558 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72558 00:06:13.897 18:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72558 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72558 ']' 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72558 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.897 18:00:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72558 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.897 killing process with pid 72558 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72558' 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72558 00:06:13.897 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72558 00:06:14.159 ************************************ 00:06:14.159 END TEST default_locks_via_rpc 00:06:14.159 ************************************ 00:06:14.159 00:06:14.159 real 0m1.385s 00:06:14.159 user 0m1.481s 00:06:14.159 sys 0m0.392s 00:06:14.159 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.159 18:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.159 18:00:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:14.159 18:00:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.159 18:00:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.159 18:00:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.159 ************************************ 00:06:14.159 START TEST non_locking_app_on_locked_coremask 00:06:14.159 ************************************ 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72604 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72604 /var/tmp/spdk.sock 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72604 ']' 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.159 18:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.159 [2024-12-13 18:00:48.478017] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
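The via_rpc variant above shows the locks can also be toggled at runtime over the RPC socket rather than only at startup; both RPC names appear verbatim in the trace. A sketch against a running target (illustrative, not the test script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks   # release the spdk_cpu_lock_* files
    "$rpc" framework_enable_cpumask_locks    # re-acquire them
    lslocks | grep spdk_cpu_lock             # the per-core locks should be visible again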
00:06:14.159 [2024-12-13 18:00:48.478257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72604 ] 00:06:14.418 [2024-12-13 18:00:48.611147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.418 [2024-12-13 18:00:48.628191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72615 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72615 /var/tmp/spdk2.sock 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72615 ']' 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.985 18:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.985 [2024-12-13 18:00:49.354397] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:14.985 [2024-12-13 18:00:49.354707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72615 ] 00:06:15.245 [2024-12-13 18:00:49.505664] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.245 [2024-12-13 18:00:49.505708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.245 [2024-12-13 18:00:49.538918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.818 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.818 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.818 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72604 00:06:15.818 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72604 00:06:15.818 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.389 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72604 00:06:16.389 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72604 ']' 00:06:16.389 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72604 00:06:16.390 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.390 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.390 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72604 00:06:16.390 killing process with pid 72604 00:06:16.390 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.390 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.390 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72604' 00:06:16.390 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72604 00:06:16.390 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72604 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72615 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72615 ']' 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72615 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72615 00:06:16.652 killing process with pid 72615 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72615' 00:06:16.652 18:00:50 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72615 00:06:16.652 18:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72615 00:06:16.914 ************************************ 00:06:16.914 END TEST non_locking_app_on_locked_coremask 00:06:16.914 ************************************ 00:06:16.914 00:06:16.914 real 0m2.786s 00:06:16.914 user 0m3.099s 00:06:16.914 sys 0m0.730s 00:06:16.914 18:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.914 18:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 18:00:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.914 18:00:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.914 18:00:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.914 18:00:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 ************************************ 00:06:16.914 START TEST locking_app_on_unlocked_coremask 00:06:16.914 ************************************ 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72673 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72673 /var/tmp/spdk.sock 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72673 ']' 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 18:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:17.175 [2024-12-13 18:00:51.298689] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:17.175 [2024-12-13 18:00:51.298801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72673 ] 00:06:17.175 [2024-12-13 18:00:51.436914] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
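The non_locking_app_on_locked_coremask run above demonstrates the escape hatch: a second target may share an already-claimed core if it opts out of lock acquisition with --disable-cpumask-locks and uses a separate RPC socket. Roughly, as a sketch that omits the waitforlisten/cleanup the test performs:

    spdk=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk" -m 0x1 &                                                  # claims core 0
    "$spdk" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0 without claiming it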
00:06:17.175 [2024-12-13 18:00:51.436948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.175 [2024-12-13 18:00:51.453864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72689 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72689 /var/tmp/spdk2.sock 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72689 ']' 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.111 18:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.111 [2024-12-13 18:00:52.195240] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:18.111 [2024-12-13 18:00:52.195724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72689 ] 00:06:18.111 [2024-12-13 18:00:52.350421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.111 [2024-12-13 18:00:52.383324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.677 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.677 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.677 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72689 00:06:18.677 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72689 00:06:18.677 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.242 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72673 00:06:19.242 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72673 ']' 00:06:19.242 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72673 00:06:19.242 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.242 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.242 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72673 00:06:19.242 killing process with pid 72673 00:06:19.242 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.242 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.243 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72673' 00:06:19.243 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72673 00:06:19.243 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72673 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72689 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72689 ']' 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72689 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72689 00:06:19.501 killing process with pid 72689 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.501 18:00:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72689' 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72689 00:06:19.501 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72689 00:06:19.760 00:06:19.760 real 0m2.758s 00:06:19.760 user 0m3.104s 00:06:19.760 sys 0m0.703s 00:06:19.760 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.760 18:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.760 ************************************ 00:06:19.760 END TEST locking_app_on_unlocked_coremask 00:06:19.760 ************************************ 00:06:19.760 18:00:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.760 18:00:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.760 18:00:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.760 18:00:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.760 ************************************ 00:06:19.760 START TEST locking_app_on_locked_coremask 00:06:19.760 ************************************ 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:19.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72747 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72747 /var/tmp/spdk.sock 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72747 ']' 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.760 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.760 [2024-12-13 18:00:54.104660] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
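Judging by the lslocks output above, the claim appears to be an ordinary advisory file lock, so the same contention can be reproduced with util-linux flock(1). This is an assumption about the lock type, purely illustrative and not something the test does:

    flock -n /var/tmp/spdk_cpu_lock_000 -c 'echo claimed; sleep 5' &    # assumed lock path, from the traces above
    sleep 1
    flock -n /var/tmp/spdk_cpu_lock_000 -c true || echo 'core 0 already claimed'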
00:06:19.760 [2024-12-13 18:00:54.104783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72747 ] 00:06:20.018 [2024-12-13 18:00:54.245064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.018 [2024-12-13 18:00:54.263239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72763 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72763 /var/tmp/spdk2.sock 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72763 /var/tmp/spdk2.sock 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72763 /var/tmp/spdk2.sock 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72763 ']' 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.585 18:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.843 [2024-12-13 18:00:54.965499] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:20.843 [2024-12-13 18:00:54.965978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72763 ] 00:06:20.843 [2024-12-13 18:00:55.116686] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72747 has claimed it. 00:06:20.843 [2024-12-13 18:00:55.116739] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.409 ERROR: process (pid: 72763) is no longer running 00:06:21.409 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72763) - No such process 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72747 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72747 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72747 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72747 ']' 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72747 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.409 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72747 00:06:21.790 killing process with pid 72747 00:06:21.790 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.790 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.790 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72747' 00:06:21.790 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72747 00:06:21.790 18:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72747 00:06:21.790 00:06:21.790 real 0m1.974s 00:06:21.790 user 0m2.219s 00:06:21.790 sys 0m0.444s 00:06:21.790 18:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.790 ************************************ 00:06:21.790 END 
TEST locking_app_on_locked_coremask 00:06:21.790 ************************************ 00:06:21.790 18:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.790 18:00:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.790 18:00:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.790 18:00:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.790 18:00:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.790 ************************************ 00:06:21.790 START TEST locking_overlapped_coremask 00:06:21.790 ************************************ 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72805 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72805 /var/tmp/spdk.sock 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72805 ']' 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.790 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.790 [2024-12-13 18:00:56.132905] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
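The failure path exercised just above is asserted with the NOT wrapper: starting a second locking instance on an already-claimed mask must fail, so waitforlisten on its pid has to return nonzero. A minimal sketch of that inversion helper (the real one in autotest_common.sh also tracks the exit status; this is not it), assuming autotest_common.sh is sourced so waitforlisten exists:

    NOT() { if "$@"; then return 1; else return 0; fi; }   # succeed only when the command fails
    NOT waitforlisten 72763 /var/tmp/spdk2.sock && echo 'second instance exited, as expected'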
00:06:21.790 [2024-12-13 18:00:56.133032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72805 ] 00:06:22.120 [2024-12-13 18:00:56.275854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.120 [2024-12-13 18:00:56.294592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.120 [2024-12-13 18:00:56.294824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.120 [2024-12-13 18:00:56.294837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72823 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72823 /var/tmp/spdk2.sock 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72823 /var/tmp/spdk2.sock 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:22.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72823 /var/tmp/spdk2.sock 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72823 ']' 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.694 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.695 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.695 18:00:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.695 [2024-12-13 18:00:57.043686] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:22.695 [2024-12-13 18:00:57.044027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72823 ] 00:06:22.955 [2024-12-13 18:00:57.204820] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72805 has claimed it. 00:06:22.955 [2024-12-13 18:00:57.204900] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.523 ERROR: process (pid: 72823) is no longer running 00:06:23.523 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72823) - No such process 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72805 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72805 ']' 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72805 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72805 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72805' 00:06:23.523 killing process with pid 72805 00:06:23.523 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72805 00:06:23.523 18:00:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72805 00:06:23.783 00:06:23.783 real 0m1.856s 00:06:23.783 user 0m5.217s 00:06:23.783 sys 0m0.358s 00:06:23.783 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.783 ************************************ 00:06:23.783 END TEST locking_overlapped_coremask 00:06:23.783 18:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.783 ************************************ 00:06:23.784 18:00:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:23.784 18:00:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.784 18:00:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.784 18:00:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.784 ************************************ 00:06:23.784 START TEST locking_overlapped_coremask_via_rpc 00:06:23.784 ************************************ 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72865 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72865 /var/tmp/spdk.sock 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72865 ']' 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.784 18:00:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.784 [2024-12-13 18:00:58.046906] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:23.784 [2024-12-13 18:00:58.047177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72865 ] 00:06:24.044 [2024-12-13 18:00:58.192925] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
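check_remaining_locks above ties a cpumask to its lock files: -m 0x7 covers cores 0 through 2, so exactly /var/tmp/spdk_cpu_lock_000 through _002 must remain once the losing instance exits. The comparison is a straight glob-versus-brace-expansion match, as in the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)               # what is actually on disk
    expected=(/var/tmp/spdk_cpu_lock_{000..002})   # what mask 0x7 implies
    [[ ${locks[*]} == "${expected[*]}" ]] && echo 'lock files match mask 0x7'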
00:06:24.044 [2024-12-13 18:00:58.192970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.044 [2024-12-13 18:00:58.217759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.044 [2024-12-13 18:00:58.218086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.044 [2024-12-13 18:00:58.218174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.615 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.615 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.615 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.615 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72883 00:06:24.615 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72883 /var/tmp/spdk2.sock 00:06:24.615 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72883 ']' 00:06:24.615 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.615 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.616 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.616 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.616 18:00:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.616 [2024-12-13 18:00:58.897580] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:24.616 [2024-12-13 18:00:58.897834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72883 ] 00:06:24.876 [2024-12-13 18:00:59.055057] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
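The two targets in this test are given deliberately overlapping masks: the first runs with -m 0x7 and the second (below) with -m 0x1c, so both want core 2. The overlap is plain bit arithmetic; an illustrative one-liner, not part of the suite:

    # 0x07 = 0b00111 -> cores 0,1,2 ; 0x1c = 0b11100 -> cores 2,3,4
    printf 'contended mask: 0x%x\n' $(( 0x7 & 0x1c ))  # 0x4, i.e. core 2 only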
00:06:24.876 [2024-12-13 18:00:59.055110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.876 [2024-12-13 18:00:59.095060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.876 [2024-12-13 18:00:59.095209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.876 [2024-12-13 18:00:59.095322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.448 [2024-12-13 18:00:59.788396] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72865 has claimed it. 00:06:25.448 request: 00:06:25.448 { 00:06:25.448 "method": "framework_enable_cpumask_locks", 00:06:25.448 "req_id": 1 00:06:25.448 } 00:06:25.448 Got JSON-RPC error response 00:06:25.448 response: 00:06:25.448 { 00:06:25.448 "code": -32603, 00:06:25.448 "message": "Failed to claim CPU core: 2" 00:06:25.448 } 00:06:25.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
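Both targets were started with --disable-cpumask-locks, so locking is switched on afterwards over JSON-RPC; the request/response pair above is the failing claim. The equivalent manual call, using the socket path from the trace, would be:

    # Ask the second target to claim lock files for its core mask.
    # Fails here with code -32603, "Failed to claim CPU core: 2",
    # because the first target already holds core 2.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks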
00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72865 /var/tmp/spdk.sock 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72865 ']' 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.448 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.449 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.449 18:00:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.708 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.708 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.708 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72883 /var/tmp/spdk2.sock 00:06:25.708 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72883 ']' 00:06:25.709 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.709 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.709 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
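The NOT wrapper around rpc_cmd above is the suite's expected-failure idiom: the step passes only if the wrapped command fails. A simplified sketch of what the helper appears to do, reconstructed from the traced lines (es=1, the es > 128 check, the final (( !es == 0 )) return); the real helper in autotest_common.sh also validates its argument via valid_exec_arg:

    NOT() {
        local es=0
        "$@" || es=$?           # run the command, keep its exit status
        (( es > 128 )) && es=1  # normalize signal deaths to plain failure
        (( !es == 0 ))          # return 0 only when the command failed
    }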
00:06:25.709 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.709 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.969 ************************************ 00:06:25.969 END TEST locking_overlapped_coremask_via_rpc 00:06:25.969 ************************************ 00:06:25.969 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.969 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.969 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:25.969 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.969 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.969 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.969 00:06:25.969 real 0m2.246s 00:06:25.969 user 0m1.043s 00:06:25.969 sys 0m0.128s 00:06:25.969 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.969 18:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.969 18:01:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:25.969 18:01:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72865 ]] 00:06:25.969 18:01:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72865 00:06:25.969 18:01:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72865 ']' 00:06:25.969 18:01:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72865 00:06:25.969 18:01:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:25.969 18:01:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.969 18:01:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72865 00:06:25.970 killing process with pid 72865 00:06:25.970 18:01:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.970 18:01:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.970 18:01:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72865' 00:06:25.970 18:01:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72865 00:06:25.970 18:01:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72865 00:06:26.231 18:01:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72883 ]] 00:06:26.231 18:01:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72883 00:06:26.231 18:01:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72883 ']' 00:06:26.231 18:01:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72883 00:06:26.231 18:01:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:26.231 18:01:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.231 
18:01:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72883 00:06:26.231 killing process with pid 72883 00:06:26.231 18:01:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:26.231 18:01:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:26.231 18:01:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72883' 00:06:26.231 18:01:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72883 00:06:26.231 18:01:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72883 00:06:26.493 18:01:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.493 Process with pid 72865 is not found 00:06:26.493 Process with pid 72883 is not found 00:06:26.493 18:01:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:26.493 18:01:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72865 ]] 00:06:26.493 18:01:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72865 00:06:26.493 18:01:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72865 ']' 00:06:26.493 18:01:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72865 00:06:26.493 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72865) - No such process 00:06:26.493 18:01:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72865 is not found' 00:06:26.493 18:01:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72883 ]] 00:06:26.493 18:01:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72883 00:06:26.493 18:01:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72883 ']' 00:06:26.493 18:01:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72883 00:06:26.493 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72883) - No such process 00:06:26.493 18:01:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72883 is not found' 00:06:26.493 18:01:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.493 ************************************ 00:06:26.493 END TEST cpu_locks 00:06:26.493 ************************************ 00:06:26.493 00:06:26.493 real 0m15.270s 00:06:26.493 user 0m27.677s 00:06:26.493 sys 0m3.771s 00:06:26.493 18:01:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.493 18:01:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.493 ************************************ 00:06:26.493 END TEST event 00:06:26.493 ************************************ 00:06:26.493 00:06:26.493 real 0m38.581s 00:06:26.493 user 1m16.609s 00:06:26.493 sys 0m6.442s 00:06:26.493 18:01:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.493 18:01:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.493 18:01:00 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.493 18:01:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.493 18:01:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.493 18:01:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.493 ************************************ 00:06:26.493 START TEST thread 00:06:26.493 ************************************ 00:06:26.493 18:01:00 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.757 * Looking for test storage... 
00:06:26.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.757 18:01:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.757 18:01:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.757 18:01:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.757 18:01:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.757 18:01:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.757 18:01:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.757 18:01:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.757 18:01:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.757 18:01:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.757 18:01:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.757 18:01:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.757 18:01:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:26.757 18:01:00 thread -- scripts/common.sh@345 -- # : 1 00:06:26.757 18:01:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.757 18:01:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.757 18:01:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:26.757 18:01:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:26.757 18:01:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.757 18:01:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:26.757 18:01:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.757 18:01:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:26.757 18:01:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:26.757 18:01:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.757 18:01:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:26.757 18:01:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.757 18:01:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.757 18:01:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.757 18:01:00 thread -- scripts/common.sh@368 -- # return 0 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.757 --rc genhtml_branch_coverage=1 00:06:26.757 --rc genhtml_function_coverage=1 00:06:26.757 --rc genhtml_legend=1 00:06:26.757 --rc geninfo_all_blocks=1 00:06:26.757 --rc geninfo_unexecuted_blocks=1 00:06:26.757 00:06:26.757 ' 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.757 --rc genhtml_branch_coverage=1 00:06:26.757 --rc genhtml_function_coverage=1 00:06:26.757 --rc genhtml_legend=1 00:06:26.757 --rc geninfo_all_blocks=1 00:06:26.757 --rc geninfo_unexecuted_blocks=1 00:06:26.757 00:06:26.757 ' 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:26.757 --rc genhtml_branch_coverage=1 00:06:26.757 --rc genhtml_function_coverage=1 00:06:26.757 --rc genhtml_legend=1 00:06:26.757 --rc geninfo_all_blocks=1 00:06:26.757 --rc geninfo_unexecuted_blocks=1 00:06:26.757 00:06:26.757 ' 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.757 --rc genhtml_branch_coverage=1 00:06:26.757 --rc genhtml_function_coverage=1 00:06:26.757 --rc genhtml_legend=1 00:06:26.757 --rc geninfo_all_blocks=1 00:06:26.757 --rc geninfo_unexecuted_blocks=1 00:06:26.757 00:06:26.757 ' 00:06:26.757 18:01:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.757 18:01:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.757 ************************************ 00:06:26.757 START TEST thread_poller_perf 00:06:26.757 ************************************ 00:06:26.757 18:01:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.757 [2024-12-13 18:01:00.980808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:26.757 [2024-12-13 18:01:00.980979] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73010 ] 00:06:26.757 [2024-12-13 18:01:01.119104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.019 [2024-12-13 18:01:01.135586] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.019 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:27.962 [2024-12-13T18:01:02.339Z] ====================================== 00:06:27.962 [2024-12-13T18:01:02.339Z] busy:2611578680 (cyc) 00:06:27.962 [2024-12-13T18:01:02.339Z] total_run_count: 413000 00:06:27.962 [2024-12-13T18:01:02.339Z] tsc_hz: 2600000000 (cyc) 00:06:27.962 [2024-12-13T18:01:02.339Z] ====================================== 00:06:27.962 [2024-12-13T18:01:02.339Z] poller_cost: 6323 (cyc), 2431 (nsec) 00:06:27.962 00:06:27.962 real 0m1.220s 00:06:27.962 user 0m1.071s 00:06:27.962 sys 0m0.044s 00:06:27.962 18:01:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.962 18:01:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.962 ************************************ 00:06:27.962 END TEST thread_poller_perf 00:06:27.962 ************************************ 00:06:27.962 18:01:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.962 18:01:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:27.962 18:01:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.962 18:01:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.962 ************************************ 00:06:27.962 START TEST thread_poller_perf 00:06:27.962 ************************************ 00:06:27.962 18:01:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.962 [2024-12-13 18:01:02.242582] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:27.962 [2024-12-13 18:01:02.242695] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73041 ] 00:06:28.223 [2024-12-13 18:01:02.383499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.223 Running 1000 pollers for 1 seconds with 0 microseconds period. 
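poller_perf reports raw TSC cycles, and the derived poller_cost figures above follow from busy cycles, run count, and TSC frequency with integer truncation at each step. A worked check of the 1 µs-period run using the logged numbers:

    # busy / total_run_count = 2611578680 / 413000 -> 6323 cyc per poll
    # cyc / 2.6 (GHz tsc_hz) = 6323 / 2.6          -> 2431 nsec per poll
    awk 'BEGIN { cyc = int(2611578680 / 413000);
                 printf "%d cyc, %d nsec\n", cyc, int(cyc / 2.6) }'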
00:06:28.223 [2024-12-13 18:01:02.400676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.162 [2024-12-13T18:01:03.539Z] ====================================== 00:06:29.162 [2024-12-13T18:01:03.539Z] busy:2602979166 (cyc) 00:06:29.162 [2024-12-13T18:01:03.539Z] total_run_count: 4904000 00:06:29.162 [2024-12-13T18:01:03.539Z] tsc_hz: 2600000000 (cyc) 00:06:29.162 [2024-12-13T18:01:03.539Z] ====================================== 00:06:29.162 [2024-12-13T18:01:03.539Z] poller_cost: 530 (cyc), 203 (nsec) 00:06:29.162 ************************************ 00:06:29.162 END TEST thread_poller_perf 00:06:29.162 ************************************ 00:06:29.162 00:06:29.162 real 0m1.217s 00:06:29.162 user 0m1.060s 00:06:29.162 sys 0m0.051s 00:06:29.162 18:01:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.162 18:01:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.162 18:01:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:29.162 ************************************ 00:06:29.162 END TEST thread 00:06:29.162 ************************************ 00:06:29.162 00:06:29.162 real 0m2.662s 00:06:29.162 user 0m2.241s 00:06:29.162 sys 0m0.217s 00:06:29.162 18:01:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.162 18:01:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.162 18:01:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:29.162 18:01:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.162 18:01:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.162 18:01:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.162 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.162 ************************************ 00:06:29.162 START TEST app_cmdline 00:06:29.162 ************************************ 00:06:29.162 18:01:03 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.423 * Looking for test storage... 
00:06:29.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.423 18:01:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.423 --rc genhtml_branch_coverage=1 00:06:29.423 --rc genhtml_function_coverage=1 00:06:29.423 --rc genhtml_legend=1 00:06:29.423 --rc geninfo_all_blocks=1 00:06:29.423 --rc geninfo_unexecuted_blocks=1 00:06:29.423 00:06:29.423 ' 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.423 --rc genhtml_branch_coverage=1 00:06:29.423 --rc genhtml_function_coverage=1 00:06:29.423 --rc genhtml_legend=1 00:06:29.423 --rc geninfo_all_blocks=1 00:06:29.423 --rc geninfo_unexecuted_blocks=1 00:06:29.423 
00:06:29.423 ' 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.423 --rc genhtml_branch_coverage=1 00:06:29.423 --rc genhtml_function_coverage=1 00:06:29.423 --rc genhtml_legend=1 00:06:29.423 --rc geninfo_all_blocks=1 00:06:29.423 --rc geninfo_unexecuted_blocks=1 00:06:29.423 00:06:29.423 ' 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.423 --rc genhtml_branch_coverage=1 00:06:29.423 --rc genhtml_function_coverage=1 00:06:29.423 --rc genhtml_legend=1 00:06:29.423 --rc geninfo_all_blocks=1 00:06:29.423 --rc geninfo_unexecuted_blocks=1 00:06:29.423 00:06:29.423 ' 00:06:29.423 18:01:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:29.423 18:01:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73119 00:06:29.423 18:01:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73119 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73119 ']' 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.423 18:01:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.423 18:01:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.423 [2024-12-13 18:01:03.715339] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
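The cmdline test starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; anything else, such as the env_dpdk_get_mem_stats probe later in the trace, is rejected with -32601 "Method not found". Issued by hand against the default /var/tmp/spdk.sock, the allowed calls look like:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version  # version object, as printed below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods   # exactly the two allowed methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats \
        || echo 'rejected, as the test expects'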
00:06:29.423 [2024-12-13 18:01:03.715608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73119 ] 00:06:29.684 [2024-12-13 18:01:03.854767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.684 [2024-12-13 18:01:03.871011] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.255 18:01:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.255 18:01:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:30.255 18:01:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:30.516 { 00:06:30.516 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:30.516 "fields": { 00:06:30.516 "major": 25, 00:06:30.516 "minor": 1, 00:06:30.516 "patch": 0, 00:06:30.516 "suffix": "-pre", 00:06:30.516 "commit": "e01cb43b8" 00:06:30.516 } 00:06:30.516 } 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:30.516 18:01:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:30.516 18:01:04 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.776 request: 00:06:30.776 { 00:06:30.776 "method": "env_dpdk_get_mem_stats", 00:06:30.776 "req_id": 1 00:06:30.776 } 00:06:30.776 Got JSON-RPC error response 00:06:30.776 response: 00:06:30.776 { 00:06:30.776 "code": -32601, 00:06:30.776 "message": "Method not found" 00:06:30.776 } 00:06:30.776 18:01:04 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:30.776 18:01:04 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.776 18:01:04 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.776 18:01:04 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.776 18:01:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73119 00:06:30.776 18:01:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73119 ']' 00:06:30.776 18:01:04 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73119 00:06:30.776 18:01:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:30.776 18:01:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.776 18:01:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73119 00:06:30.776 18:01:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.776 killing process with pid 73119 00:06:30.776 18:01:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.776 18:01:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73119' 00:06:30.776 18:01:05 app_cmdline -- common/autotest_common.sh@973 -- # kill 73119 00:06:30.776 18:01:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 73119 00:06:31.036 00:06:31.036 real 0m1.736s 00:06:31.036 user 0m2.109s 00:06:31.036 sys 0m0.376s 00:06:31.036 18:01:05 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.036 ************************************ 00:06:31.036 END TEST app_cmdline 00:06:31.036 ************************************ 00:06:31.036 18:01:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.036 18:01:05 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.036 18:01:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.036 18:01:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.036 18:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:31.036 ************************************ 00:06:31.036 START TEST version 00:06:31.036 ************************************ 00:06:31.036 18:01:05 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.036 * Looking for test storage... 
00:06:31.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.036 18:01:05 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.036 18:01:05 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.036 18:01:05 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.036 18:01:05 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.036 18:01:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.036 18:01:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.036 18:01:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.036 18:01:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.036 18:01:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.036 18:01:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.036 18:01:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.036 18:01:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.036 18:01:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.036 18:01:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.036 18:01:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.036 18:01:05 version -- scripts/common.sh@344 -- # case "$op" in 00:06:31.036 18:01:05 version -- scripts/common.sh@345 -- # : 1 00:06:31.036 18:01:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.036 18:01:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.036 18:01:05 version -- scripts/common.sh@365 -- # decimal 1 00:06:31.036 18:01:05 version -- scripts/common.sh@353 -- # local d=1 00:06:31.036 18:01:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.036 18:01:05 version -- scripts/common.sh@355 -- # echo 1 00:06:31.036 18:01:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.036 18:01:05 version -- scripts/common.sh@366 -- # decimal 2 00:06:31.036 18:01:05 version -- scripts/common.sh@353 -- # local d=2 00:06:31.036 18:01:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.036 18:01:05 version -- scripts/common.sh@355 -- # echo 2 00:06:31.036 18:01:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.036 18:01:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.036 18:01:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.297 18:01:05 version -- scripts/common.sh@368 -- # return 0 00:06:31.297 18:01:05 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.297 18:01:05 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.297 --rc genhtml_branch_coverage=1 00:06:31.297 --rc genhtml_function_coverage=1 00:06:31.297 --rc genhtml_legend=1 00:06:31.297 --rc geninfo_all_blocks=1 00:06:31.297 --rc geninfo_unexecuted_blocks=1 00:06:31.297 00:06:31.297 ' 00:06:31.297 18:01:05 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.297 --rc genhtml_branch_coverage=1 00:06:31.297 --rc genhtml_function_coverage=1 00:06:31.297 --rc genhtml_legend=1 00:06:31.297 --rc geninfo_all_blocks=1 00:06:31.297 --rc geninfo_unexecuted_blocks=1 00:06:31.297 00:06:31.297 ' 00:06:31.297 18:01:05 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.297 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:31.297 --rc genhtml_branch_coverage=1 00:06:31.297 --rc genhtml_function_coverage=1 00:06:31.297 --rc genhtml_legend=1 00:06:31.297 --rc geninfo_all_blocks=1 00:06:31.297 --rc geninfo_unexecuted_blocks=1 00:06:31.297 00:06:31.297 ' 00:06:31.297 18:01:05 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.297 --rc genhtml_branch_coverage=1 00:06:31.297 --rc genhtml_function_coverage=1 00:06:31.297 --rc genhtml_legend=1 00:06:31.297 --rc geninfo_all_blocks=1 00:06:31.297 --rc geninfo_unexecuted_blocks=1 00:06:31.297 00:06:31.297 ' 00:06:31.297 18:01:05 version -- app/version.sh@17 -- # get_header_version major 00:06:31.297 18:01:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.297 18:01:05 version -- app/version.sh@14 -- # cut -f2 00:06:31.297 18:01:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.297 18:01:05 version -- app/version.sh@17 -- # major=25 00:06:31.297 18:01:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.297 18:01:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.297 18:01:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.297 18:01:05 version -- app/version.sh@14 -- # cut -f2 00:06:31.297 18:01:05 version -- app/version.sh@18 -- # minor=1 00:06:31.297 18:01:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:31.297 18:01:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.297 18:01:05 version -- app/version.sh@14 -- # cut -f2 00:06:31.297 18:01:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.297 18:01:05 version -- app/version.sh@19 -- # patch=0 00:06:31.297 18:01:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.297 18:01:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.297 18:01:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.297 18:01:05 version -- app/version.sh@14 -- # cut -f2 00:06:31.297 18:01:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.297 18:01:05 version -- app/version.sh@22 -- # version=25.1 00:06:31.297 18:01:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.297 18:01:05 version -- app/version.sh@28 -- # version=25.1rc0 00:06:31.297 18:01:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.297 18:01:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.297 18:01:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:31.297 18:01:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:31.297 00:06:31.297 real 0m0.179s 00:06:31.297 user 0m0.122s 00:06:31.297 sys 0m0.087s 00:06:31.297 18:01:05 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.297 18:01:05 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.297 ************************************ 00:06:31.297 END TEST version 00:06:31.297 ************************************ 00:06:31.297 18:01:05 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.297 18:01:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:31.298 18:01:05 -- spdk/autotest.sh@194 -- # uname -s 00:06:31.298 18:01:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:31.298 18:01:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.298 18:01:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.298 18:01:05 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:31.298 18:01:05 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:31.298 18:01:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.298 18:01:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.298 18:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:31.298 ************************************ 00:06:31.298 START TEST blockdev_nvme 00:06:31.298 ************************************ 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:31.298 * Looking for test storage... 00:06:31.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.298 18:01:05 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.298 --rc genhtml_branch_coverage=1 00:06:31.298 --rc genhtml_function_coverage=1 00:06:31.298 --rc genhtml_legend=1 00:06:31.298 --rc geninfo_all_blocks=1 00:06:31.298 --rc geninfo_unexecuted_blocks=1 00:06:31.298 00:06:31.298 ' 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.298 --rc genhtml_branch_coverage=1 00:06:31.298 --rc genhtml_function_coverage=1 00:06:31.298 --rc genhtml_legend=1 00:06:31.298 --rc geninfo_all_blocks=1 00:06:31.298 --rc geninfo_unexecuted_blocks=1 00:06:31.298 00:06:31.298 ' 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.298 --rc genhtml_branch_coverage=1 00:06:31.298 --rc genhtml_function_coverage=1 00:06:31.298 --rc genhtml_legend=1 00:06:31.298 --rc geninfo_all_blocks=1 00:06:31.298 --rc geninfo_unexecuted_blocks=1 00:06:31.298 00:06:31.298 ' 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.298 --rc genhtml_branch_coverage=1 00:06:31.298 --rc genhtml_function_coverage=1 00:06:31.298 --rc genhtml_legend=1 00:06:31.298 --rc geninfo_all_blocks=1 00:06:31.298 --rc geninfo_unexecuted_blocks=1 00:06:31.298 00:06:31.298 ' 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.298 18:01:05 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73280 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73280 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 73280 ']' 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.298 18:01:05 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.298 18:01:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.558 [2024-12-13 18:01:05.712200] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
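blockdev.sh builds its NVMe configuration by running gen_nvme.sh and feeding the result to load_subsystem_config, as traced below. Reformatted for readability, the generated JSON attaches one controller per PCIe address (contents as in the trace; the ellipsis elides the three analogous entries):

    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        ... Nvme1/Nvme2/Nvme3 at 0000:00:11.0, :12.0 and :13.0 ...
      ] }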
00:06:31.558 [2024-12-13 18:01:05.712329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73280 ] 00:06:31.558 [2024-12-13 18:01:05.852909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.558 [2024-12-13 18:01:05.869264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.130 18:01:06 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.130 18:01:06 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:32.130 18:01:06 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:32.130 18:01:06 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:32.130 18:01:06 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:32.130 18:01:06 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:32.130 18:01:06 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:32.392 18:01:06 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:32.392 18:01:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.392 18:01:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.654 18:01:06 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.654 18:01:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:32.654 18:01:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.654 18:01:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.654 18:01:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.654 18:01:06 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:32.654 18:01:06 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.654 18:01:06 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:32.654 18:01:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.654 18:01:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:32.654 18:01:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:32.655 18:01:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "13bdaf3e-c393-474a-81ba-99592285d134"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "13bdaf3e-c393-474a-81ba-99592285d134",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "63bf6f99-5056-4861-ad02-56c13a26da9a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "63bf6f99-5056-4861-ad02-56c13a26da9a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0d7d72e5-25db-49d6-8dff-79fd72301103"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0d7d72e5-25db-49d6-8dff-79fd72301103",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7a0c68da-f260-42f0-b2d4-755fbfedbaab"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7a0c68da-f260-42f0-b2d4-755fbfedbaab",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d4eea481-e90e-419c-807c-0b40aaf83172"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "d4eea481-e90e-419c-807c-0b40aaf83172",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8b689785-0d19-4c67-b184-9e9aaa021e14"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8b689785-0d19-4c67-b184-9e9aaa021e14",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:32.655 18:01:06 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:32.655 18:01:06 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:32.655 18:01:06 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:32.655 18:01:06 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 73280 00:06:32.655 18:01:06 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 73280 ']' 00:06:32.655 18:01:06 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 73280 00:06:32.655 18:01:06 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:32.655 18:01:06 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.655 18:01:06 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73280 00:06:32.655 18:01:07 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.655 18:01:07 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.655 killing process with pid 73280 00:06:32.655 18:01:07 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73280' 00:06:32.655 18:01:07 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 73280 00:06:32.655 18:01:07 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 73280 00:06:32.916 18:01:07 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:32.916 18:01:07 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:32.916 18:01:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:32.916 18:01:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.916 18:01:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.916 ************************************ 00:06:32.916 START TEST bdev_hello_world 00:06:32.916 ************************************ 00:06:32.916 18:01:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.177 [2024-12-13 18:01:07.304919] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:33.177 [2024-12-13 18:01:07.305056] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73353 ] 00:06:33.177 [2024-12-13 18:01:07.447431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.177 [2024-12-13 18:01:07.464282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.749 [2024-12-13 18:01:07.825368] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:33.749 [2024-12-13 18:01:07.825415] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:33.749 [2024-12-13 18:01:07.825431] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:33.749 [2024-12-13 18:01:07.827087] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:33.749 [2024-12-13 18:01:07.827513] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:33.749 [2024-12-13 18:01:07.827542] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:33.749 [2024-12-13 18:01:07.827756] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
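The NOTICE lines above are SPDK's hello_bdev example tracing its full cycle: open the bdev, grab an I/O channel, write a buffer, read it back, and print the recovered string. The test drives it with the suite's shared JSON config; run standalone it amounts to the following (paths as in the trace, output abridged):

    # Standalone equivalent of the bdev_hello_world test.
    build/examples/hello_bdev \
        --json test/bdev/bdev.json \
        -b Nvme0n1    # -b selects the bdev; blockdev.sh sets hello_world_bdev=Nvme0n1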
00:06:33.749 00:06:33.749 [2024-12-13 18:01:07.827786] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:33.749 00:06:33.749 real 0m0.712s 00:06:33.749 user 0m0.469s 00:06:33.749 sys 0m0.141s 00:06:33.749 18:01:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.749 ************************************ 00:06:33.749 END TEST bdev_hello_world 00:06:33.749 18:01:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:33.749 ************************************ 00:06:33.749 18:01:07 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:33.749 18:01:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.749 18:01:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.749 18:01:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.749 ************************************ 00:06:33.749 START TEST bdev_bounds 00:06:33.749 ************************************ 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73384 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.749 Process bdevio pid: 73384 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73384' 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73384 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73384 ']' 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.749 18:01:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:33.749 [2024-12-13 18:01:08.042898] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
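bdev_bounds starts the bdevio app in wait mode: -w makes it come up and sit on the RPC socket instead of running tests immediately, and -s 0 passes PRE_RESERVED_MEM=0 as the app's memory-size hint. Once waitforlisten returns, tests.py fires the suites, as the trace below shows. Condensed:

    # How bdev_bounds drives bdevio (condensed from the trace).
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &    # start and wait
    bdevio_pid=$!
    # ... waitforlisten "$bdevio_pid" ...
    test/bdev/bdevio/tests.py perform_tests                         # run all suites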
00:06:33.749 [2024-12-13 18:01:08.042999] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73384 ] 00:06:34.010 [2024-12-13 18:01:08.188529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.010 [2024-12-13 18:01:08.209185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.010 [2024-12-13 18:01:08.209418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.010 [2024-12-13 18:01:08.209498] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.580 18:01:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.580 18:01:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:34.580 18:01:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:34.840 I/O targets: 00:06:34.840 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:34.840 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:34.840 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:34.840 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:34.840 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:34.840 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:34.840 00:06:34.840 00:06:34.840 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.840 http://cunit.sourceforge.net/ 00:06:34.840 00:06:34.840 00:06:34.840 Suite: bdevio tests on: Nvme3n1 00:06:34.840 Test: blockdev write read block ...passed 00:06:34.840 Test: blockdev write zeroes read block ...passed 00:06:34.840 Test: blockdev write zeroes read no split ...passed 00:06:34.840 Test: blockdev write zeroes read split ...passed 00:06:34.840 Test: blockdev write zeroes read split partial ...passed 00:06:34.840 Test: blockdev reset ...[2024-12-13 18:01:08.987771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:34.840 passed 00:06:34.840 Test: blockdev write read 8 blocks ...[2024-12-13 18:01:08.989734] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
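Each suite's blockdev reset test disconnects and reattaches the controller; that is what the nvme_ctrlr_disconnect / "Resetting controller successful" pair above records. The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions logged below in the comparev and passthru tests are expected as well: those cases verify that a mismatching COMPARE and an unsupported opcode fail cleanly, and the test passes by observing the error status. Outside bdevio, a comparable reset can be issued by hand (hypothetical usage; controller names come from the attach config):

    # Hypothetical manual reset of one of the attached controllers.
    scripts/rpc.py bdev_nvme_reset_controller Nvme3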
00:06:34.840 passed 00:06:34.840 Test: blockdev write read size > 128k ...passed 00:06:34.840 Test: blockdev write read invalid size ...passed 00:06:34.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.840 Test: blockdev write read max offset ...passed 00:06:34.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.840 Test: blockdev writev readv 8 blocks ...passed 00:06:34.840 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.840 Test: blockdev writev readv block ...passed 00:06:34.840 Test: blockdev writev readv size > 128k ...passed 00:06:34.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.840 Test: blockdev comparev and writev ...[2024-12-13 18:01:08.994226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf006000 len:0x1000 00:06:34.840 [2024-12-13 18:01:08.994279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.840 passed 00:06:34.840 Test: blockdev nvme passthru rw ...passed 00:06:34.840 Test: blockdev nvme passthru vendor specific ...[2024-12-13 18:01:08.994715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:34.840 [2024-12-13 18:01:08.994741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:34.840 passed 00:06:34.840 Test: blockdev nvme admin passthru ...passed 00:06:34.841 Test: blockdev copy ...passed 00:06:34.841 Suite: bdevio tests on: Nvme2n3 00:06:34.841 Test: blockdev write read block ...passed 00:06:34.841 Test: blockdev write zeroes read block ...passed 00:06:34.841 Test: blockdev write zeroes read no split ...passed 00:06:34.841 Test: blockdev write zeroes read split ...passed 00:06:34.841 Test: blockdev write zeroes read split partial ...passed 00:06:34.841 Test: blockdev reset ...[2024-12-13 18:01:09.010476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:34.841 passed 00:06:34.841 Test: blockdev write read 8 blocks ...[2024-12-13 18:01:09.012355] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:34.841 passed 00:06:34.841 Test: blockdev write read size > 128k ...passed 00:06:34.841 Test: blockdev write read invalid size ...passed 00:06:34.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.841 Test: blockdev write read max offset ...passed 00:06:34.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.841 Test: blockdev writev readv 8 blocks ...passed 00:06:34.841 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.841 Test: blockdev writev readv block ...passed 00:06:34.841 Test: blockdev writev readv size > 128k ...passed 00:06:34.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.841 Test: blockdev comparev and writev ...passed 00:06:34.841 Test: blockdev nvme passthru rw ...[2024-12-13 18:01:09.016260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba402000 len:0x1000 00:06:34.841 [2024-12-13 18:01:09.016300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.841 passed 00:06:34.841 Test: blockdev nvme passthru vendor specific ...passed 00:06:34.841 Test: blockdev nvme admin passthru ...[2024-12-13 18:01:09.016738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:34.841 [2024-12-13 18:01:09.016764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:34.841 passed 00:06:34.841 Test: blockdev copy ...passed 00:06:34.841 Suite: bdevio tests on: Nvme2n2 00:06:34.841 Test: blockdev write read block ...passed 00:06:34.841 Test: blockdev write zeroes read block ...passed 00:06:34.841 Test: blockdev write zeroes read no split ...passed 00:06:34.841 Test: blockdev write zeroes read split ...passed 00:06:34.841 Test: blockdev write zeroes read split partial ...passed 00:06:34.841 Test: blockdev reset ...[2024-12-13 18:01:09.031191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:34.841 passed 00:06:34.841 Test: blockdev write read 8 blocks ...[2024-12-13 18:01:09.033034] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:34.841 passed 00:06:34.841 Test: blockdev write read size > 128k ...passed 00:06:34.841 Test: blockdev write read invalid size ...passed 00:06:34.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.841 Test: blockdev write read max offset ...passed 00:06:34.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.841 Test: blockdev writev readv 8 blocks ...passed 00:06:34.841 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.841 Test: blockdev writev readv block ...passed 00:06:34.841 Test: blockdev writev readv size > 128k ...passed 00:06:34.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.841 Test: blockdev comparev and writev ...[2024-12-13 18:01:09.036951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d183b000 len:0x1000 00:06:34.841 [2024-12-13 18:01:09.036982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.841 passed 00:06:34.841 Test: blockdev nvme passthru rw ...passed 00:06:34.841 Test: blockdev nvme passthru vendor specific ...passed 00:06:34.841 Test: blockdev nvme admin passthru ...[2024-12-13 18:01:09.037377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:34.841 [2024-12-13 18:01:09.037398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:34.841 passed 00:06:34.841 Test: blockdev copy ...passed 00:06:34.841 Suite: bdevio tests on: Nvme2n1 00:06:34.841 Test: blockdev write read block ...passed 00:06:34.841 Test: blockdev write zeroes read block ...passed 00:06:34.841 Test: blockdev write zeroes read no split ...passed 00:06:34.841 Test: blockdev write zeroes read split ...passed 00:06:34.841 Test: blockdev write zeroes read split partial ...passed 00:06:34.841 Test: blockdev reset ...[2024-12-13 18:01:09.052707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:34.841 [2024-12-13 18:01:09.054547] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:34.841 passed 00:06:34.841 Test: blockdev write read 8 blocks ...passed 00:06:34.841 Test: blockdev write read size > 128k ...passed 00:06:34.841 Test: blockdev write read invalid size ...passed 00:06:34.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.841 Test: blockdev write read max offset ...passed 00:06:34.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.841 Test: blockdev writev readv 8 blocks ...passed 00:06:34.841 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.841 Test: blockdev writev readv block ...passed 00:06:34.841 Test: blockdev writev readv size > 128k ...passed 00:06:34.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.841 Test: blockdev comparev and writev ...passed 00:06:34.841 Test: blockdev nvme passthru rw ...[2024-12-13 18:01:09.058573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1837000 len:0x1000 00:06:34.841 [2024-12-13 18:01:09.058608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.841 passed 00:06:34.841 Test: blockdev nvme passthru vendor specific ...[2024-12-13 18:01:09.059030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:34.841 [2024-12-13 18:01:09.059053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:34.841 passed 00:06:34.841 Test: blockdev nvme admin passthru ...passed 00:06:34.841 Test: blockdev copy ...passed 00:06:34.841 Suite: bdevio tests on: Nvme1n1 00:06:34.841 Test: blockdev write read block ...passed 00:06:34.841 Test: blockdev write zeroes read block ...passed 00:06:34.841 Test: blockdev write zeroes read no split ...passed 00:06:34.841 Test: blockdev write zeroes read split ...passed 00:06:34.841 Test: blockdev write zeroes read split partial ...passed 00:06:34.841 Test: blockdev reset ...[2024-12-13 18:01:09.074359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:34.841 passed 00:06:34.841 Test: blockdev write read 8 blocks ...[2024-12-13 18:01:09.075846] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:34.841 passed 00:06:34.841 Test: blockdev write read size > 128k ...passed 00:06:34.841 Test: blockdev write read invalid size ...passed 00:06:34.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.841 Test: blockdev write read max offset ...passed 00:06:34.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.841 Test: blockdev writev readv 8 blocks ...passed 00:06:34.841 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.841 Test: blockdev writev readv block ...passed 00:06:34.841 Test: blockdev writev readv size > 128k ...passed 00:06:34.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.841 Test: blockdev comparev and writev ...[2024-12-13 18:01:09.079833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1833000 len:0x1000 00:06:34.841 [2024-12-13 18:01:09.079869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:34.841 passed 00:06:34.841 Test: blockdev nvme passthru rw ...passed 00:06:34.841 Test: blockdev nvme passthru vendor specific ...passed 00:06:34.841 Test: blockdev nvme admin passthru ...[2024-12-13 18:01:09.080298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:34.841 [2024-12-13 18:01:09.080321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:34.841 passed 00:06:34.841 Test: blockdev copy ...passed 00:06:34.841 Suite: bdevio tests on: Nvme0n1 00:06:34.841 Test: blockdev write read block ...passed 00:06:34.841 Test: blockdev write zeroes read block ...passed 00:06:34.841 Test: blockdev write zeroes read no split ...passed 00:06:34.841 Test: blockdev write zeroes read split ...passed 00:06:34.841 Test: blockdev write zeroes read split partial ...passed 00:06:34.842 Test: blockdev reset ...[2024-12-13 18:01:09.096324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:34.842 passed 00:06:34.842 Test: blockdev write read 8 blocks ...[2024-12-13 18:01:09.097968] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:34.842 passed 00:06:34.842 Test: blockdev write read size > 128k ...passed 00:06:34.842 Test: blockdev write read invalid size ...passed 00:06:34.842 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:34.842 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:34.842 Test: blockdev write read max offset ...passed 00:06:34.842 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:34.842 Test: blockdev writev readv 8 blocks ...passed 00:06:34.842 Test: blockdev writev readv 30 x 1block ...passed 00:06:34.842 Test: blockdev writev readv block ...passed 00:06:34.842 Test: blockdev writev readv size > 128k ...passed 00:06:34.842 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:34.842 Test: blockdev comparev and writev ...passed 00:06:34.842 Test: blockdev nvme passthru rw ...[2024-12-13 18:01:09.101237] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:34.842 separate metadata which is not supported yet. 
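That skip matches the bdev dump earlier in the run: Nvme0n1 is the only target exposing separate, non-interleaved metadata ("md_size": 64, "md_interleave": false), which bdevio's comparev_and_writev does not support yet. One way to confirm against a live target (hypothetical invocation):

    # Inspect Nvme0n1's metadata layout (hypothetical, against a running target).
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'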
00:06:34.842 passed 00:06:34.842 Test: blockdev nvme passthru vendor specific ...[2024-12-13 18:01:09.101909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:34.842 [2024-12-13 18:01:09.102026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:34.842 passed 00:06:34.842 Test: blockdev nvme admin passthru ...passed 00:06:34.842 Test: blockdev copy ...passed 00:06:34.842 00:06:34.842 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.842 suites 6 6 n/a 0 0 00:06:34.842 tests 138 138 138 0 0 00:06:34.842 asserts 893 893 893 0 n/a 00:06:34.842 00:06:34.842 Elapsed time = 0.304 seconds 00:06:34.842 0 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73384 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73384 ']' 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73384 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73384 00:06:34.842 killing process with pid 73384 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73384' 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73384 00:06:34.842 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73384 00:06:35.101 18:01:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:35.101 00:06:35.101 real 0m1.288s 00:06:35.101 user 0m3.343s 00:06:35.101 sys 0m0.259s 00:06:35.101 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.101 ************************************ 00:06:35.101 END TEST bdev_bounds 00:06:35.101 ************************************ 00:06:35.101 18:01:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:35.101 18:01:09 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.101 18:01:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:35.101 18:01:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.101 18:01:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.101 ************************************ 00:06:35.101 START TEST bdev_nbd 00:06:35.101 ************************************ 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73427 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73427 /var/tmp/spdk-nbd.sock 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73427 ']' 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:35.101 18:01:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:35.101 [2024-12-13 18:01:09.389229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
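The nbd test that follows exports each bdev as a kernel /dev/nbdX node through the nbd_start_disk RPC on /var/tmp/spdk-nbd.sock, waits for the node to appear in /proc/partitions, and proves it is readable with a single direct-I/O dd. The per-device pattern, condensed from the trace (scratch-file path shortened, and the sleep between retries is assumed since the successful grep returns immediately here):

    # Export one bdev over NBD and sanity-read it (condensed sketch).
    rpc() { scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc nbd_start_disk Nvme0n1 /dev/nbd0
    for ((i = 1; i <= 20; i++)); do                      # waitfornbd
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                        # interval assumed
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    rpc nbd_stop_disk /dev/nbd0                          # then waitfornbd_exit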
00:06:35.101 [2024-12-13 18:01:09.389349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.360 [2024-12-13 18:01:09.530287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.360 [2024-12-13 18:01:09.549534] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:35.927 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.185 1+0 records in 
00:06:36.185 1+0 records out 00:06:36.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222594 s, 18.4 MB/s 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.185 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.444 1+0 records in 00:06:36.444 1+0 records out 00:06:36.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444129 s, 9.2 MB/s 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.444 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.703 1+0 records in 00:06:36.703 1+0 records out 00:06:36.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344279 s, 11.9 MB/s 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.703 18:01:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.961 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.962 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.962 1+0 records in 00:06:36.962 1+0 records out 00:06:36.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521119 s, 7.9 MB/s 00:06:36.962 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.962 18:01:11 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:36.962 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.962 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.962 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:36.962 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.962 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.962 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.220 1+0 records in 00:06:37.220 1+0 records out 00:06:37.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396474 s, 10.3 MB/s 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.220 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.479 1+0 records in 00:06:37.479 1+0 records out 00:06:37.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383392 s, 10.7 MB/s 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd0", 00:06:37.479 "bdev_name": "Nvme0n1" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd1", 00:06:37.479 "bdev_name": "Nvme1n1" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd2", 00:06:37.479 "bdev_name": "Nvme2n1" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd3", 00:06:37.479 "bdev_name": "Nvme2n2" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd4", 00:06:37.479 "bdev_name": "Nvme2n3" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd5", 00:06:37.479 "bdev_name": "Nvme3n1" 00:06:37.479 } 00:06:37.479 ]' 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd0", 00:06:37.479 "bdev_name": "Nvme0n1" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd1", 00:06:37.479 "bdev_name": "Nvme1n1" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd2", 00:06:37.479 "bdev_name": "Nvme2n1" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd3", 00:06:37.479 "bdev_name": "Nvme2n2" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd4", 00:06:37.479 "bdev_name": "Nvme2n3" 00:06:37.479 }, 00:06:37.479 { 00:06:37.479 "nbd_device": "/dev/nbd5", 00:06:37.479 "bdev_name": "Nvme3n1" 00:06:37.479 } 00:06:37.479 ]' 00:06:37.479 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:37.738 18:01:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:37.738 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.738 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:37.738 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.738 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:37.738 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.738 18:01:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.738 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.996 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:38.254 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:38.254 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:38.254 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:38.254 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.254 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.254 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:38.254 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.254 18:01:12 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:38.255 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.255 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.513 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.771 18:01:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.030 18:01:13 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.030 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.288 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:39.288 /dev/nbd0 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.289 
18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.289 1+0 records in 00:06:39.289 1+0 records out 00:06:39.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316075 s, 13.0 MB/s 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.289 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:39.547 /dev/nbd1 00:06:39.547 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.548 1+0 records in 00:06:39.548 1+0 records out 00:06:39.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750396 s, 5.5 MB/s 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.548 18:01:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:39.806 /dev/nbd10 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.806 1+0 records in 00:06:39.806 1+0 records out 00:06:39.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521347 s, 7.9 MB/s 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.806 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:40.066 /dev/nbd11 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.066 18:01:14 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.066 1+0 records in 00:06:40.066 1+0 records out 00:06:40.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789171 s, 5.2 MB/s 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.066 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:40.325 /dev/nbd12 00:06:40.325 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:40.325 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:40.325 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:40.325 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.325 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.325 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.326 1+0 records in 00:06:40.326 1+0 records out 00:06:40.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000953481 s, 4.3 MB/s 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.326 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:40.586 /dev/nbd13 
00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.586 1+0 records in 00:06:40.586 1+0 records out 00:06:40.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463089 s, 8.8 MB/s 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.586 18:01:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.847 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd0", 00:06:40.847 "bdev_name": "Nvme0n1" 00:06:40.847 }, 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd1", 00:06:40.847 "bdev_name": "Nvme1n1" 00:06:40.847 }, 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd10", 00:06:40.847 "bdev_name": "Nvme2n1" 00:06:40.847 }, 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd11", 00:06:40.847 "bdev_name": "Nvme2n2" 00:06:40.847 }, 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd12", 00:06:40.847 "bdev_name": "Nvme2n3" 00:06:40.847 }, 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd13", 00:06:40.847 "bdev_name": "Nvme3n1" 00:06:40.847 } 00:06:40.847 ]' 00:06:40.847 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.847 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd0", 00:06:40.847 "bdev_name": "Nvme0n1" 00:06:40.847 }, 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd1", 00:06:40.847 "bdev_name": "Nvme1n1" 00:06:40.847 
}, 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd10", 00:06:40.847 "bdev_name": "Nvme2n1" 00:06:40.847 }, 00:06:40.847 { 00:06:40.847 "nbd_device": "/dev/nbd11", 00:06:40.848 "bdev_name": "Nvme2n2" 00:06:40.848 }, 00:06:40.848 { 00:06:40.848 "nbd_device": "/dev/nbd12", 00:06:40.848 "bdev_name": "Nvme2n3" 00:06:40.848 }, 00:06:40.848 { 00:06:40.848 "nbd_device": "/dev/nbd13", 00:06:40.848 "bdev_name": "Nvme3n1" 00:06:40.848 } 00:06:40.848 ]' 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.848 /dev/nbd1 00:06:40.848 /dev/nbd10 00:06:40.848 /dev/nbd11 00:06:40.848 /dev/nbd12 00:06:40.848 /dev/nbd13' 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.848 /dev/nbd1 00:06:40.848 /dev/nbd10 00:06:40.848 /dev/nbd11 00:06:40.848 /dev/nbd12 00:06:40.848 /dev/nbd13' 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:40.848 256+0 records in 00:06:40.848 256+0 records out 00:06:40.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065994 s, 159 MB/s 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.848 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.114 256+0 records in 00:06:41.114 256+0 records out 00:06:41.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15821 s, 6.6 MB/s 00:06:41.114 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.114 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.114 256+0 records in 00:06:41.114 256+0 records out 00:06:41.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0847571 s, 12.4 MB/s 00:06:41.114 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.114 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:41.114 256+0 records in 00:06:41.114 256+0 records out 00:06:41.114 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.141847 s, 7.4 MB/s 00:06:41.114 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.114 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:41.380 256+0 records in 00:06:41.380 256+0 records out 00:06:41.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0975881 s, 10.7 MB/s 00:06:41.380 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.380 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:41.642 256+0 records in 00:06:41.642 256+0 records out 00:06:41.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101876 s, 10.3 MB/s 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:41.642 256+0 records in 00:06:41.642 256+0 records out 00:06:41.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183862 s, 5.7 MB/s 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.642 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.903 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.165 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.165 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.165 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.165 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.165 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.165 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.165 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.165 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.166 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.166 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.426 18:01:16 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:42.426 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:42.686 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:42.686 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:42.686 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.686 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.686 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:42.686 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.686 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.686 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.686 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.687 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.945 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:43.203 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:43.461 malloc_lvol_verify 00:06:43.461 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:43.720 609e2902-8f77-44bf-b2f0-bc7633bd2a92 00:06:43.720 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:43.720 82ea6701-5d59-4292-8db2-025804a9fdc1 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:43.981 /dev/nbd0 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:43.981 mke2fs 1.47.0 (5-Feb-2023) 00:06:43.981 Discarding device blocks: 0/4096 done 00:06:43.981 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:43.981 00:06:43.981 Allocating group tables: 0/1 done 00:06:43.981 Writing inode tables: 0/1 done 00:06:43.981 Creating journal (1024 blocks): done 00:06:43.981 Writing superblocks and filesystem accounting information: 0/1 done 00:06:43.981 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:43.981 18:01:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.981 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73427 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73427 ']' 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73427 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73427 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.240 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.240 killing process with pid 73427 00:06:44.241 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73427' 00:06:44.241 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73427 00:06:44.241 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73427 00:06:44.502 18:01:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:44.502 00:06:44.502 real 0m9.398s 00:06:44.502 user 0m13.470s 00:06:44.502 sys 0m3.137s 00:06:44.502 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.502 18:01:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:44.502 ************************************ 00:06:44.502 END TEST bdev_nbd 00:06:44.502 ************************************ 00:06:44.502 18:01:18 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:44.502 18:01:18 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:44.502 skipping fio tests on NVMe due to multi-ns failures. 00:06:44.502 18:01:18 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:44.502 18:01:18 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:44.502 18:01:18 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:44.502 18:01:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:44.502 18:01:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.502 18:01:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.502 ************************************ 00:06:44.502 START TEST bdev_verify 00:06:44.502 ************************************ 00:06:44.502 18:01:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:44.502 [2024-12-13 18:01:18.828185] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:44.502 [2024-12-13 18:01:18.828300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73795 ] 00:06:44.763 [2024-12-13 18:01:18.975507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.763 [2024-12-13 18:01:18.994785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.763 [2024-12-13 18:01:18.994898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.024 Running I/O for 5 seconds... 00:06:47.352 22528.00 IOPS, 88.00 MiB/s [2024-12-13T18:01:22.669Z] 22624.00 IOPS, 88.38 MiB/s [2024-12-13T18:01:23.611Z] 22570.67 IOPS, 88.17 MiB/s [2024-12-13T18:01:24.553Z] 22656.00 IOPS, 88.50 MiB/s [2024-12-13T18:01:24.553Z] 22796.80 IOPS, 89.05 MiB/s 00:06:50.176 Latency(us) 00:06:50.176 [2024-12-13T18:01:24.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.176 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x0 length 0xbd0bd 00:06:50.176 Nvme0n1 : 5.04 1879.57 7.34 0.00 0.00 67850.76 13510.50 64527.75 00:06:50.176 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:50.176 Nvme0n1 : 5.02 1860.34 7.27 0.00 0.00 68582.11 12905.55 66947.54 00:06:50.176 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x0 length 0xa0000 00:06:50.176 Nvme1n1 : 5.04 1879.06 7.34 0.00 0.00 67736.83 12754.31 62511.26 00:06:50.176 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0xa0000 length 0xa0000 00:06:50.176 Nvme1n1 : 5.06 1871.53 7.31 0.00 0.00 68105.56 10788.23 63317.86 00:06:50.176 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x0 length 0x80000 00:06:50.176 Nvme2n1 : 5.06 1884.90 7.36 0.00 0.00 67433.02 6856.07 60898.07 00:06:50.176 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x80000 length 0x80000 00:06:50.176 Nvme2n1 : 5.06 1871.00 7.31 0.00 0.00 67977.16 11141.12 59284.87 00:06:50.176 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x0 length 0x80000 00:06:50.176 Nvme2n2 : 5.07 1894.21 7.40 0.00 0.00 67090.96 6553.60 56865.08 00:06:50.176 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x80000 length 0x80000 00:06:50.176 Nvme2n2 : 5.06 1870.51 7.31 0.00 0.00 67873.21 11040.30 58881.58 00:06:50.176 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x0 length 0x80000 00:06:50.176 Nvme2n3 : 5.07 1893.70 7.40 0.00 0.00 66978.87 6856.07 60091.47 00:06:50.176 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x80000 length 0x80000 00:06:50.176 Nvme2n3 : 5.07 1870.03 7.30 0.00 0.00 67752.28 10889.06 60494.77 00:06:50.176 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x0 length 0x20000 00:06:50.176 Nvme3n1 : 5.07 1893.18 7.40 0.00 0.00 66863.90 7208.96 63721.16 00:06:50.176 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:50.176 Verification LBA range: start 0x20000 length 0x20000 00:06:50.176 Nvme3n1 : 5.07 1869.51 7.30 0.00 0.00 67648.47 8620.50 63317.86 00:06:50.176 [2024-12-13T18:01:24.553Z] =================================================================================================================== 00:06:50.176 [2024-12-13T18:01:24.553Z] Total : 22537.54 88.04 0.00 0.00 67654.31 6553.60 66947.54 00:06:50.749 00:06:50.749 real 0m6.336s 00:06:50.749 user 0m11.676s 00:06:50.749 sys 0m0.183s 00:06:50.749 18:01:25 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.749 18:01:25 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:50.749 ************************************ 00:06:50.749 END TEST bdev_verify 00:06:50.749 ************************************ 00:06:51.009 18:01:25 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:51.009 18:01:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:51.009 18:01:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.009 18:01:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.009 ************************************ 00:06:51.009 START TEST bdev_verify_big_io 00:06:51.009 ************************************ 00:06:51.009 18:01:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:51.009 [2024-12-13 18:01:25.198221] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:51.009 [2024-12-13 18:01:25.198339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73882 ] 00:06:51.009 [2024-12-13 18:01:25.343039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.009 [2024-12-13 18:01:25.362600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.009 [2024-12-13 18:01:25.362667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.580 Running I/O for 5 seconds... 00:06:56.360 1527.00 IOPS, 95.44 MiB/s [2024-12-13T18:01:32.119Z] 2450.00 IOPS, 153.12 MiB/s [2024-12-13T18:01:32.119Z] 3091.00 IOPS, 193.19 MiB/s 00:06:57.743 Latency(us) 00:06:57.743 [2024-12-13T18:01:32.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.743 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0x0 length 0xbd0b 00:06:57.743 Nvme0n1 : 5.75 122.48 7.65 0.00 0.00 998210.17 10939.47 1213121.77 00:06:57.743 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:57.743 Nvme0n1 : 5.64 137.02 8.56 0.00 0.00 867124.17 16938.54 1019538.51 00:06:57.743 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0x0 length 0xa000 00:06:57.743 Nvme1n1 : 5.93 125.23 7.83 0.00 0.00 940891.78 102437.81 993727.41 00:06:57.743 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0xa000 length 0xa000 00:06:57.743 Nvme1n1 : 5.75 143.91 8.99 0.00 0.00 827887.86 90742.15 858219.13 00:06:57.743 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0x0 length 0x8000 00:06:57.743 Nvme2n1 : 5.94 126.70 7.92 0.00 0.00 904121.68 77030.01 1322818.95 00:06:57.743 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0x8000 length 0x8000 00:06:57.743 Nvme2n1 : 5.85 148.70 9.29 0.00 0.00 783572.52 37708.41 764653.88 00:06:57.743 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0x0 length 0x8000 00:06:57.743 Nvme2n2 : 5.96 132.55 8.28 0.00 0.00 835686.80 20769.87 1045349.61 00:06:57.743 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0x8000 length 0x8000 00:06:57.743 Nvme2n2 : 5.92 151.52 9.47 0.00 0.00 744573.04 65737.65 787238.60 00:06:57.743 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0x0 length 0x8000 00:06:57.743 Nvme2n3 : 6.00 136.75 8.55 0.00 0.00 780137.46 15829.46 1768060.46 00:06:57.743 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0x8000 length 0x8000 00:06:57.743 Nvme2n3 : 5.86 153.00 9.56 0.00 0.00 720963.74 65737.65 809823.31 00:06:57.743 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.743 Verification LBA range: start 0x0 length 0x2000 00:06:57.743 Nvme3n1 : 6.06 173.20 10.82 0.00 0.00 602130.00 69.32 1793871.56 00:06:57.743 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:06:57.743 Verification LBA range: start 0x2000 length 0x2000 00:06:57.743 Nvme3n1 : 5.93 168.54 10.53 0.00 0.00 638691.17 2306.36 816276.09 00:06:57.743 [2024-12-13T18:01:32.120Z] =================================================================================================================== 00:06:57.743 [2024-12-13T18:01:32.120Z] Total : 1719.60 107.48 0.00 0.00 790694.42 69.32 1793871.56 00:06:58.686 00:06:58.686 real 0m7.620s 00:06:58.686 user 0m14.263s 00:06:58.686 sys 0m0.189s 00:06:58.686 18:01:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.686 18:01:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:58.686 ************************************ 00:06:58.686 END TEST bdev_verify_big_io 00:06:58.686 ************************************ 00:06:58.686 18:01:32 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.686 18:01:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:58.686 18:01:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.686 18:01:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.686 ************************************ 00:06:58.686 START TEST bdev_write_zeroes 00:06:58.686 ************************************ 00:06:58.686 18:01:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.686 [2024-12-13 18:01:32.859300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:58.686 [2024-12-13 18:01:32.859415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73988 ] 00:06:58.687 [2024-12-13 18:01:33.004478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.687 [2024-12-13 18:01:33.022636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.257 Running I/O for 1 seconds... 
00:07:00.198 70656.00 IOPS, 276.00 MiB/s 00:07:00.198 Latency(us) 00:07:00.198 [2024-12-13T18:01:34.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.198 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.198 Nvme0n1 : 1.02 11748.45 45.89 0.00 0.00 10874.48 9477.51 20669.05 00:07:00.198 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.198 Nvme1n1 : 1.02 11734.66 45.84 0.00 0.00 10871.01 9275.86 19963.27 00:07:00.198 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.198 Nvme2n1 : 1.02 11721.34 45.79 0.00 0.00 10848.21 9527.93 19055.85 00:07:00.198 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.198 Nvme2n2 : 1.02 11708.18 45.74 0.00 0.00 10835.63 9477.51 18551.73 00:07:00.198 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.198 Nvme2n3 : 1.02 11695.05 45.68 0.00 0.00 10823.65 7713.08 19156.68 00:07:00.198 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.198 Nvme3n1 : 1.02 11681.93 45.63 0.00 0.00 10809.68 6755.25 20568.22 00:07:00.198 [2024-12-13T18:01:34.575Z] =================================================================================================================== 00:07:00.198 [2024-12-13T18:01:34.575Z] Total : 70289.60 274.57 0.00 0.00 10843.78 6755.25 20669.05 00:07:00.458 00:07:00.458 real 0m1.796s 00:07:00.458 user 0m1.529s 00:07:00.458 sys 0m0.157s 00:07:00.458 18:01:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.458 18:01:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:00.458 ************************************ 00:07:00.458 END TEST bdev_write_zeroes 00:07:00.458 ************************************ 00:07:00.458 18:01:34 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:00.458 18:01:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:00.458 18:01:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.458 18:01:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.458 ************************************ 00:07:00.458 START TEST bdev_json_nonenclosed 00:07:00.458 ************************************ 00:07:00.458 18:01:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:00.458 [2024-12-13 18:01:34.694932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:00.458 [2024-12-13 18:01:34.695039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74019 ] 00:07:00.719 [2024-12-13 18:01:34.840068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.719 [2024-12-13 18:01:34.859064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.719 [2024-12-13 18:01:34.859143] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:00.719 [2024-12-13 18:01:34.859158] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:00.719 [2024-12-13 18:01:34.859172] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.719 00:07:00.719 real 0m0.283s 00:07:00.719 user 0m0.108s 00:07:00.719 sys 0m0.072s 00:07:00.719 18:01:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.719 ************************************ 00:07:00.719 END TEST bdev_json_nonenclosed 00:07:00.719 18:01:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:00.719 ************************************ 00:07:00.719 18:01:34 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:00.719 18:01:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:00.719 18:01:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.719 18:01:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.719 ************************************ 00:07:00.719 START TEST bdev_json_nonarray 00:07:00.719 ************************************ 00:07:00.719 18:01:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:00.719 [2024-12-13 18:01:35.015152] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:00.719 [2024-12-13 18:01:35.015280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74050 ] 00:07:00.980 [2024-12-13 18:01:35.161892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.980 [2024-12-13 18:01:35.181026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.980 [2024-12-13 18:01:35.181124] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
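Both JSON negative tests exercise the same rule: the app only accepts a config that is a single JSON object whose "subsystems" key is an array. A minimal config that satisfies both checks (a sketch mirroring the bdev_nvme_attach_controller parameters used later in this run):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
          }
        ]
      }
    ]
  }

The nonenclosed.json fixture violates the first rule and nonarray.json the second, so both runs are expected to fail exactly as logged, which is why the surrounding tests still report success.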
00:07:00.980 [2024-12-13 18:01:35.181139] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:00.980 [2024-12-13 18:01:35.181151] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.980 00:07:00.980 real 0m0.285s 00:07:00.980 user 0m0.112s 00:07:00.980 sys 0m0.070s 00:07:00.980 18:01:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.981 18:01:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:00.981 ************************************ 00:07:00.981 END TEST bdev_json_nonarray 00:07:00.981 ************************************ 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:00.981 18:01:35 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:00.981 ************************************ 00:07:00.981 END TEST blockdev_nvme 00:07:00.981 ************************************ 00:07:00.982 00:07:00.982 real 0m29.784s 00:07:00.982 user 0m46.850s 00:07:00.982 sys 0m4.856s 00:07:00.982 18:01:35 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.982 18:01:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.982 18:01:35 -- spdk/autotest.sh@209 -- # uname -s 00:07:00.982 18:01:35 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:00.982 18:01:35 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:00.982 18:01:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.982 18:01:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.982 18:01:35 -- common/autotest_common.sh@10 -- # set +x 00:07:00.982 ************************************ 00:07:00.982 START TEST blockdev_nvme_gpt 00:07:00.982 ************************************ 00:07:00.982 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:01.243 * Looking for test storage... 
00:07:01.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:01.243 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.243 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.243 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.243 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:01.243 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.244 18:01:35 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.244 --rc genhtml_branch_coverage=1 00:07:01.244 --rc genhtml_function_coverage=1 00:07:01.244 --rc genhtml_legend=1 00:07:01.244 --rc geninfo_all_blocks=1 00:07:01.244 --rc geninfo_unexecuted_blocks=1 00:07:01.244 00:07:01.244 ' 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.244 --rc 
genhtml_branch_coverage=1 00:07:01.244 --rc genhtml_function_coverage=1 00:07:01.244 --rc genhtml_legend=1 00:07:01.244 --rc geninfo_all_blocks=1 00:07:01.244 --rc geninfo_unexecuted_blocks=1 00:07:01.244 00:07:01.244 ' 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.244 --rc genhtml_branch_coverage=1 00:07:01.244 --rc genhtml_function_coverage=1 00:07:01.244 --rc genhtml_legend=1 00:07:01.244 --rc geninfo_all_blocks=1 00:07:01.244 --rc geninfo_unexecuted_blocks=1 00:07:01.244 00:07:01.244 ' 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.244 --rc genhtml_branch_coverage=1 00:07:01.244 --rc genhtml_function_coverage=1 00:07:01.244 --rc genhtml_legend=1 00:07:01.244 --rc geninfo_all_blocks=1 00:07:01.244 --rc geninfo_unexecuted_blocks=1 00:07:01.244 00:07:01.244 ' 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74123 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74123 
00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 74123 ']' 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.244 18:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:01.244 18:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:01.244 [2024-12-13 18:01:35.539020] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:01.244 [2024-12-13 18:01:35.539137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74123 ] 00:07:01.504 [2024-12-13 18:01:35.684417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.504 [2024-12-13 18:01:35.703478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.075 18:01:36 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.075 18:01:36 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:02.075 18:01:36 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:02.075 18:01:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:02.075 18:01:36 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:02.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.597 Waiting for block devices as requested 00:07:02.597 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.597 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.597 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.597 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:07.895 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:07.895 18:01:41 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:07.895 18:01:41 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:07.895 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:07.896 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:07.896 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:07.896 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:07.896 18:01:41 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:07.896 18:01:41 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:07.896 18:01:41 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:07.896 18:01:41 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:07.896 18:01:41 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:07.896 18:01:41 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:07.896 18:01:41 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:07.896 18:01:41 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:07.896 BYT; 00:07:07.896 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:07.896 BYT; 00:07:07.896 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:07.896 18:01:42 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:07.896 18:01:42 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:08.829 The operation has completed successfully. 00:07:08.829 18:01:43 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:09.763 The operation has completed successfully. 00:07:09.763 18:01:44 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:10.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:10.586 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.586 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.586 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.844 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.844 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:10.844 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.844 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.844 [] 00:07:10.844 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.844 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:10.844 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:10.844 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:10.844 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:10.844 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:10.844 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.844 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.101 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:11.101 18:01:45 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.101 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:11.101 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.101 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.101 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.101 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:11.101 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.101 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:11.359 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.359 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:11.359 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:11.360 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "bd19710e-7ab7-440d-bc4f-017df6b9c918"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "bd19710e-7ab7-440d-bc4f-017df6b9c918",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3ac8a8e6-2589-4352-a0b8-9e38cd6595e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3ac8a8e6-2589-4352-a0b8-9e38cd6595e5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d7b02891-366a-4743-b264-a246faa559ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d7b02891-366a-4743-b264-a246faa559ce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c90b0f6d-6f5c-4d15-94f9-694b31e38ba5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c90b0f6d-6f5c-4d15-94f9-694b31e38ba5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b482a680-33d9-4371-a389-5c6f663cd47c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b482a680-33d9-4371-a389-5c6f663cd47c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:11.360 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:11.360 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:11.360 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:11.360 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 74123 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 74123 ']' 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 74123 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74123 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.360 killing process with pid 74123 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74123' 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 74123 00:07:11.360 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 74123 00:07:11.618 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:11.618 18:01:45 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:11.618 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:11.618 18:01:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.618 18:01:45 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.618 ************************************ 00:07:11.618 START TEST bdev_hello_world 00:07:11.618 ************************************ 00:07:11.618 18:01:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:11.618 [2024-12-13 18:01:45.860510] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:11.618 [2024-12-13 18:01:45.860629] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74730 ] 00:07:11.876 [2024-12-13 18:01:46.002743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.876 [2024-12-13 18:01:46.019442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.134 [2024-12-13 18:01:46.377288] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:12.134 [2024-12-13 18:01:46.377329] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:12.134 [2024-12-13 18:01:46.377347] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:12.134 [2024-12-13 18:01:46.378931] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:12.134 [2024-12-13 18:01:46.379371] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:12.134 [2024-12-13 18:01:46.379399] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:12.134 [2024-12-13 18:01:46.379662] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
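The NOTICE sequence above is hello_bdev's whole round trip: start the app, open bdev Nvme0n1, grab an I/O channel, write a buffer, then read it back and print the recovered string. A minimal sketch of the invocation, assuming the same checkout layout as this run:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1
  # --json: attach the NVMe controllers described in the test config
  # -b: name of the bdev to open for the write/read round trip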
00:07:12.134 00:07:12.134 [2024-12-13 18:01:46.379686] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:12.134 00:07:12.134 real 0m0.695s 00:07:12.134 user 0m0.479s 00:07:12.134 sys 0m0.114s 00:07:12.134 18:01:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.134 18:01:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:12.134 ************************************ 00:07:12.134 END TEST bdev_hello_world 00:07:12.134 ************************************ 00:07:12.392 18:01:46 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:12.392 18:01:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.392 18:01:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.392 18:01:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.392 ************************************ 00:07:12.392 START TEST bdev_bounds 00:07:12.392 ************************************ 00:07:12.392 18:01:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:12.392 18:01:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74761 00:07:12.392 18:01:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.392 Process bdevio pid: 74761 00:07:12.392 18:01:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74761' 00:07:12.392 18:01:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74761 00:07:12.392 18:01:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74761 ']' 00:07:12.393 18:01:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.393 18:01:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.393 18:01:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.393 18:01:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.393 18:01:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.393 18:01:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:12.393 [2024-12-13 18:01:46.600164] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:12.393 [2024-12-13 18:01:46.600296] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74761 ] 00:07:12.393 [2024-12-13 18:01:46.739795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.393 [2024-12-13 18:01:46.758786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.393 [2024-12-13 18:01:46.759416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.393 [2024-12-13 18:01:46.759504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.327 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.327 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:13.327 18:01:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:13.327 I/O targets: 00:07:13.327 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:13.327 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:13.327 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:13.327 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.327 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.327 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.327 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:13.327 00:07:13.327 00:07:13.327 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.327 http://cunit.sourceforge.net/ 00:07:13.327 00:07:13.327 00:07:13.327 Suite: bdevio tests on: Nvme3n1 00:07:13.327 Test: blockdev write read block ...passed 00:07:13.327 Test: blockdev write zeroes read block ...passed 00:07:13.327 Test: blockdev write zeroes read no split ...passed 00:07:13.327 Test: blockdev write zeroes read split ...passed 00:07:13.327 Test: blockdev write zeroes read split partial ...passed 00:07:13.327 Test: blockdev reset ...[2024-12-13 18:01:47.552038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:13.327 [2024-12-13 18:01:47.553939] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:13.327 passed 00:07:13.327 Test: blockdev write read 8 blocks ...passed 00:07:13.327 Test: blockdev write read size > 128k ...passed 00:07:13.327 Test: blockdev write read invalid size ...passed 00:07:13.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.327 Test: blockdev write read max offset ...passed 00:07:13.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.327 Test: blockdev writev readv 8 blocks ...passed 00:07:13.327 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.327 Test: blockdev writev readv block ...passed 00:07:13.327 Test: blockdev writev readv size > 128k ...passed 00:07:13.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.327 Test: blockdev comparev and writev ...[2024-12-13 18:01:47.558323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a0e000 len:0x1000 00:07:13.327 [2024-12-13 18:01:47.558378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.327 passed 00:07:13.327 Test: blockdev nvme passthru rw ...passed 00:07:13.327 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.328 Test: blockdev nvme admin passthru ...[2024-12-13 18:01:47.558844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.328 [2024-12-13 18:01:47.558869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.328 passed 00:07:13.328 Test: blockdev copy ...passed 00:07:13.328 Suite: bdevio tests on: Nvme2n3 00:07:13.328 Test: blockdev write read block ...passed 00:07:13.328 Test: blockdev write zeroes read block ...passed 00:07:13.328 Test: blockdev write zeroes read no split ...passed 00:07:13.328 Test: blockdev write zeroes read split ...passed 00:07:13.328 Test: blockdev write zeroes read split partial ...passed 00:07:13.328 Test: blockdev reset ...[2024-12-13 18:01:47.573551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.328 [2024-12-13 18:01:47.575148] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:13.328 passed 00:07:13.328 Test: blockdev write read 8 blocks ...passed 00:07:13.328 Test: blockdev write read size > 128k ...passed 00:07:13.328 Test: blockdev write read invalid size ...passed 00:07:13.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.328 Test: blockdev write read max offset ...passed 00:07:13.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.328 Test: blockdev writev readv 8 blocks ...passed 00:07:13.328 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.328 Test: blockdev writev readv block ...passed 00:07:13.328 Test: blockdev writev readv size > 128k ...passed 00:07:13.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.328 Test: blockdev comparev and writev ...[2024-12-13 18:01:47.579364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a08000 len:0x1000 00:07:13.328 passed 00:07:13.328 Test: blockdev nvme passthru rw ...[2024-12-13 18:01:47.579403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.328 passed 00:07:13.328 Test: blockdev nvme passthru vendor specific ...[2024-12-13 18:01:47.579891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.328 passed 00:07:13.328 Test: blockdev nvme admin passthru ...[2024-12-13 18:01:47.579918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.328 passed 00:07:13.328 Test: blockdev copy ...passed 00:07:13.328 Suite: bdevio tests on: Nvme2n2 00:07:13.328 Test: blockdev write read block ...passed 00:07:13.328 Test: blockdev write zeroes read block ...passed 00:07:13.328 Test: blockdev write zeroes read no split ...passed 00:07:13.328 Test: blockdev write zeroes read split ...passed 00:07:13.328 Test: blockdev write zeroes read split partial ...passed 00:07:13.328 Test: blockdev reset ...[2024-12-13 18:01:47.594708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.328 [2024-12-13 18:01:47.596178] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:13.328 passed 00:07:13.328 Test: blockdev write read 8 blocks ...passed 00:07:13.328 Test: blockdev write read size > 128k ...passed 00:07:13.328 Test: blockdev write read invalid size ...passed 00:07:13.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.328 Test: blockdev write read max offset ...passed 00:07:13.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.328 Test: blockdev writev readv 8 blocks ...passed 00:07:13.328 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.328 Test: blockdev writev readv block ...passed 00:07:13.328 Test: blockdev writev readv size > 128k ...passed 00:07:13.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.328 Test: blockdev comparev and writev ...[2024-12-13 18:01:47.600274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a02000 len:0x1000 00:07:13.328 [2024-12-13 18:01:47.600310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.328 passed 00:07:13.328 Test: blockdev nvme passthru rw ...passed 00:07:13.328 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.328 Test: blockdev nvme admin passthru ...[2024-12-13 18:01:47.600763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.328 [2024-12-13 18:01:47.600781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.328 passed 00:07:13.328 Test: blockdev copy ...passed 00:07:13.328 Suite: bdevio tests on: Nvme2n1 00:07:13.328 Test: blockdev write read block ...passed 00:07:13.328 Test: blockdev write zeroes read block ...passed 00:07:13.328 Test: blockdev write zeroes read no split ...passed 00:07:13.328 Test: blockdev write zeroes read split ...passed 00:07:13.328 Test: blockdev write zeroes read split partial ...passed 00:07:13.328 Test: blockdev reset ...[2024-12-13 18:01:47.616268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:13.328 [2024-12-13 18:01:47.617826] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:13.328 passed 00:07:13.328 Test: blockdev write read 8 blocks ...passed 00:07:13.328 Test: blockdev write read size > 128k ...passed 00:07:13.328 Test: blockdev write read invalid size ...passed 00:07:13.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.328 Test: blockdev write read max offset ...passed 00:07:13.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.328 Test: blockdev writev readv 8 blocks ...passed 00:07:13.328 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.328 Test: blockdev writev readv block ...passed 00:07:13.328 Test: blockdev writev readv size > 128k ...passed 00:07:13.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.328 Test: blockdev comparev and writev ...[2024-12-13 18:01:47.621839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7404000 len:0x1000 00:07:13.328 [2024-12-13 18:01:47.621880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.328 passed 00:07:13.328 Test: blockdev nvme passthru rw ...passed 00:07:13.328 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.328 Test: blockdev nvme admin passthru ...[2024-12-13 18:01:47.622397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.328 [2024-12-13 18:01:47.622424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.328 passed 00:07:13.328 Test: blockdev copy ...passed 00:07:13.328 Suite: bdevio tests on: Nvme1n1p2 00:07:13.328 Test: blockdev write read block ...passed 00:07:13.328 Test: blockdev write zeroes read block ...passed 00:07:13.328 Test: blockdev write zeroes read no split ...passed 00:07:13.328 Test: blockdev write zeroes read split ...passed 00:07:13.328 Test: blockdev write zeroes read split partial ...passed 00:07:13.328 Test: blockdev reset ...[2024-12-13 18:01:47.638052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:13.328 [2024-12-13 18:01:47.639324] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:13.329 passed 00:07:13.329 Test: blockdev write read 8 blocks ...passed 00:07:13.329 Test: blockdev write read size > 128k ...passed 00:07:13.329 Test: blockdev write read invalid size ...passed 00:07:13.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.329 Test: blockdev write read max offset ...passed 00:07:13.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.329 Test: blockdev writev readv 8 blocks ...passed 00:07:13.329 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.329 Test: blockdev writev readv block ...passed 00:07:13.329 Test: blockdev writev readv size > 128k ...passed 00:07:13.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.329 Test: blockdev comparev and writev ...[2024-12-13 18:01:47.643424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cfc3d000 len:0x1000 00:07:13.329 [2024-12-13 18:01:47.643464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.329 passed 00:07:13.329 Test: blockdev nvme passthru rw ...passed 00:07:13.329 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.329 Test: blockdev nvme admin passthru ...passed 00:07:13.329 Test: blockdev copy ...passed 00:07:13.329 Suite: bdevio tests on: Nvme1n1p1 00:07:13.329 Test: blockdev write read block ...passed 00:07:13.329 Test: blockdev write zeroes read block ...passed 00:07:13.329 Test: blockdev write zeroes read no split ...passed 00:07:13.329 Test: blockdev write zeroes read split ...passed 00:07:13.329 Test: blockdev write zeroes read split partial ...passed 00:07:13.329 Test: blockdev reset ...[2024-12-13 18:01:47.654682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:13.329 [2024-12-13 18:01:47.655901] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
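Note the COMPARE for Nvme1n1p2 above lands at namespace lba:655360 rather than 0: partition-relative block 0 is shifted by the partition's GPT start offset before it reaches the namespace. A back-of-envelope check; the block size is an assumption, not something this log states:

echo $(( 655360 * 4096 / 1048576 )) MiB   # 2560 MiB into the namespace if blocks are 4 KiB
echo $(( 655360 * 512  / 1048576 )) MiB   # 320 MiB if blocks are 512 B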
00:07:13.329 passed 00:07:13.329 Test: blockdev write read 8 blocks ...passed 00:07:13.329 Test: blockdev write read size > 128k ...passed 00:07:13.329 Test: blockdev write read invalid size ...passed 00:07:13.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.329 Test: blockdev write read max offset ...passed 00:07:13.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.329 Test: blockdev writev readv 8 blocks ...passed 00:07:13.329 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.329 Test: blockdev writev readv block ...passed 00:07:13.329 Test: blockdev writev readv size > 128k ...passed 00:07:13.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.329 Test: blockdev comparev and writev ...[2024-12-13 18:01:47.659781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2cfc39000 len:0x1000 00:07:13.329 [2024-12-13 18:01:47.659816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.329 passed 00:07:13.329 Test: blockdev nvme passthru rw ...passed 00:07:13.329 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.329 Test: blockdev nvme admin passthru ...passed 00:07:13.329 Test: blockdev copy ...passed 00:07:13.329 Suite: bdevio tests on: Nvme0n1 00:07:13.329 Test: blockdev write read block ...passed 00:07:13.329 Test: blockdev write zeroes read block ...passed 00:07:13.329 Test: blockdev write zeroes read no split ...passed 00:07:13.329 Test: blockdev write zeroes read split ...passed 00:07:13.329 Test: blockdev write zeroes read split partial ...passed 00:07:13.329 Test: blockdev reset ...[2024-12-13 18:01:47.670747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:13.329 [2024-12-13 18:01:47.672202] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:13.329 passed 00:07:13.329 Test: blockdev write read 8 blocks ...passed 00:07:13.329 Test: blockdev write read size > 128k ...passed 00:07:13.329 Test: blockdev write read invalid size ...passed 00:07:13.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.329 Test: blockdev write read max offset ...passed 00:07:13.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.329 Test: blockdev writev readv 8 blocks ...passed 00:07:13.329 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.329 Test: blockdev writev readv block ...passed 00:07:13.329 Test: blockdev writev readv size > 128k ...passed 00:07:13.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.329 Test: blockdev comparev and writev ...passed 00:07:13.329 Test: blockdev nvme passthru rw ...[2024-12-13 18:01:47.675805] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:13.329 separate metadata which is not supported yet. 
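Per the bdevio.c ERROR notice above, comparev_and_writev is skipped on Nvme0n1 because that bdev carries separate (non-interleaved) metadata. As a sketch of how to confirm this for a given bdev, the metadata layout can be pulled from its descriptor; the field names here are as they appear in a typical bdev_get_bdevs dump and should be treated as an assumption:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
  | jq '.[0] | {block_size, md_size, md_interleave}'
# md_size > 0 with md_interleave == false indicates separate metadata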
00:07:13.329 passed 00:07:13.329 Test: blockdev nvme passthru vendor specific ...[2024-12-13 18:01:47.676139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:13.329 passed 00:07:13.329 Test: blockdev nvme admin passthru ...[2024-12-13 18:01:47.676170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:13.329 passed 00:07:13.329 Test: blockdev copy ...passed 00:07:13.329 00:07:13.329 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.329 suites 7 7 n/a 0 0 00:07:13.329 tests 161 161 161 0 0 00:07:13.329 asserts 1025 1025 1025 0 n/a 00:07:13.329 00:07:13.329 Elapsed time = 0.329 seconds 00:07:13.329 0 00:07:13.330 18:01:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74761 00:07:13.330 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74761 ']' 00:07:13.330 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74761 00:07:13.330 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:13.330 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.330 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74761 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.588 killing process with pid 74761 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74761' 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74761 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74761 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:13.588 00:07:13.588 real 0m1.292s 00:07:13.588 user 0m3.402s 00:07:13.588 sys 0m0.233s 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.588 ************************************ 00:07:13.588 END TEST bdev_bounds 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:13.588 ************************************ 00:07:13.588 18:01:47 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.588 18:01:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.588 18:01:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.588 18:01:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.588 ************************************ 00:07:13.588 START TEST bdev_nbd 00:07:13.588 ************************************ 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74804 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74804 /var/tmp/spdk-nbd.sock 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74804 ']' 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:13.588 18:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:13.588 [2024-12-13 18:01:47.934724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:13.588 [2024-12-13 18:01:47.934815] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.847 [2024-12-13 18:01:48.072368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.847 [2024-12-13 18:01:48.089619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.413 18:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.413 18:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:14.413 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.413 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.671 18:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.671 1+0 records in 00:07:14.671 1+0 records out 00:07:14.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350429 s, 11.7 MB/s 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.671 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.930 1+0 records in 00:07:14.930 1+0 records out 00:07:14.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326685 s, 12.5 MB/s 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.930 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.189 1+0 records in 00:07:15.189 1+0 records out 00:07:15.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318782 s, 12.8 MB/s 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.189 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.447 1+0 records in 00:07:15.447 1+0 records out 00:07:15.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425234 s, 9.6 MB/s 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.447 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.705 1+0 records in 00:07:15.705 1+0 records out 00:07:15.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500787 s, 8.2 MB/s 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.705 18:01:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.965 1+0 records in 00:07:15.965 1+0 records out 00:07:15.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527449 s, 7.8 MB/s 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.965 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.224 1+0 records in 00:07:16.224 1+0 records out 00:07:16.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065707 s, 6.2 MB/s 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.224 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.225 18:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.225 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.225 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.225 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd0", 00:07:16.482 "bdev_name": "Nvme0n1" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd1", 00:07:16.482 "bdev_name": "Nvme1n1p1" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd2", 00:07:16.482 "bdev_name": "Nvme1n1p2" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd3", 00:07:16.482 "bdev_name": "Nvme2n1" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd4", 00:07:16.482 "bdev_name": "Nvme2n2" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd5", 00:07:16.482 "bdev_name": "Nvme2n3" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd6", 00:07:16.482 "bdev_name": "Nvme3n1" 00:07:16.482 } 00:07:16.482 ]' 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd0", 00:07:16.482 "bdev_name": "Nvme0n1" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd1", 00:07:16.482 "bdev_name": "Nvme1n1p1" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd2", 00:07:16.482 "bdev_name": "Nvme1n1p2" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd3", 00:07:16.482 "bdev_name": "Nvme2n1" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd4", 00:07:16.482 "bdev_name": "Nvme2n2" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd5", 00:07:16.482 "bdev_name": "Nvme2n3" 00:07:16.482 }, 00:07:16.482 { 00:07:16.482 "nbd_device": "/dev/nbd6", 00:07:16.482 "bdev_name": "Nvme3n1" 00:07:16.482 } 00:07:16.482 ]' 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.482 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.483 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.741 18:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.741 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:16.999 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:16.999 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:16.999 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:16.999 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.999 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.999 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:16.999 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.999 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.999 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.999 18:01:51 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.257 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.515 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.773 18:01:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:18.030 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:18.030 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:18.030 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:18.030 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.030 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.030 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:18.030 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.030 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.030 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:18.031 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:18.289 
18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:18.289 /dev/nbd0 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.289 1+0 records in 00:07:18.289 1+0 records out 00:07:18.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493686 s, 8.3 MB/s 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.289 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:18.547 /dev/nbd1 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.547 18:01:52 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.547 1+0 records in 00:07:18.547 1+0 records out 00:07:18.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348858 s, 11.7 MB/s 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.547 18:01:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:18.804 /dev/nbd10 00:07:18.804 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:18.804 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:18.804 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:18.804 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.804 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.804 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.804 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.805 1+0 records in 00:07:18.805 1+0 records out 00:07:18.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281959 s, 14.5 MB/s 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.805 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:19.062 /dev/nbd11 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.062 1+0 records in 00:07:19.062 1+0 records out 00:07:19.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288477 s, 14.2 MB/s 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.062 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:19.320 /dev/nbd12 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.320 1+0 records in 00:07:19.320 1+0 records out 00:07:19.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403711 s, 10.1 MB/s 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.320 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:19.628 /dev/nbd13 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.628 1+0 records in 00:07:19.628 1+0 records out 00:07:19.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429353 s, 9.5 MB/s 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.628 18:01:53 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.628 18:01:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:19.886 /dev/nbd14 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.886 1+0 records in 00:07:19.886 1+0 records out 00:07:19.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531038 s, 7.7 MB/s 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd0", 00:07:19.886 "bdev_name": "Nvme0n1" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd1", 00:07:19.886 "bdev_name": "Nvme1n1p1" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd10", 00:07:19.886 "bdev_name": "Nvme1n1p2" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd11", 00:07:19.886 "bdev_name": "Nvme2n1" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd12", 00:07:19.886 "bdev_name": "Nvme2n2" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd13", 
00:07:19.886 "bdev_name": "Nvme2n3" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd14", 00:07:19.886 "bdev_name": "Nvme3n1" 00:07:19.886 } 00:07:19.886 ]' 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd0", 00:07:19.886 "bdev_name": "Nvme0n1" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd1", 00:07:19.886 "bdev_name": "Nvme1n1p1" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd10", 00:07:19.886 "bdev_name": "Nvme1n1p2" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd11", 00:07:19.886 "bdev_name": "Nvme2n1" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd12", 00:07:19.886 "bdev_name": "Nvme2n2" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd13", 00:07:19.886 "bdev_name": "Nvme2n3" 00:07:19.886 }, 00:07:19.886 { 00:07:19.886 "nbd_device": "/dev/nbd14", 00:07:19.886 "bdev_name": "Nvme3n1" 00:07:19.886 } 00:07:19.886 ]' 00:07:19.886 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.144 /dev/nbd1 00:07:20.144 /dev/nbd10 00:07:20.144 /dev/nbd11 00:07:20.144 /dev/nbd12 00:07:20.144 /dev/nbd13 00:07:20.144 /dev/nbd14' 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.144 /dev/nbd1 00:07:20.144 /dev/nbd10 00:07:20.144 /dev/nbd11 00:07:20.144 /dev/nbd12 00:07:20.144 /dev/nbd13 00:07:20.144 /dev/nbd14' 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:20.144 256+0 records in 00:07:20.144 256+0 records out 00:07:20.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00455626 s, 230 MB/s 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.144 256+0 records in 00:07:20.144 256+0 records out 00:07:20.144 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0700488 s, 15.0 MB/s 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.144 256+0 records in 00:07:20.144 256+0 records out 00:07:20.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0730133 s, 14.4 MB/s 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.144 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:20.402 256+0 records in 00:07:20.402 256+0 records out 00:07:20.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0810427 s, 12.9 MB/s 00:07:20.402 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.402 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:20.402 256+0 records in 00:07:20.402 256+0 records out 00:07:20.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0738941 s, 14.2 MB/s 00:07:20.402 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.402 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:20.402 256+0 records in 00:07:20.402 256+0 records out 00:07:20.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0719067 s, 14.6 MB/s 00:07:20.402 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.402 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:20.402 256+0 records in 00:07:20.402 256+0 records out 00:07:20.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0715084 s, 14.7 MB/s 00:07:20.402 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.402 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:20.660 256+0 records in 00:07:20.660 256+0 records out 00:07:20.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0726017 s, 14.4 MB/s 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.660 18:01:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.917 18:01:55 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.917 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.174 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.432 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.691 18:01:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:21.950 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:21.950 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:21.950 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:21.950 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.950 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.950 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:21.950 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.951 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.211 18:01:56 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:22.211 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:22.471 malloc_lvol_verify 00:07:22.471 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:22.731 057faeb5-ff70-4cbf-9019-e2768d55cf83 00:07:22.731 18:01:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:22.992 b8e93897-ffc6-4eb6-8b4e-047fa46a5ca5 00:07:22.992 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:22.992 /dev/nbd0 00:07:22.992 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:22.992 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:22.992 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:22.992 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:22.992 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:22.992 mke2fs 1.47.0 (5-Feb-2023) 00:07:22.992 Discarding device blocks: 0/4096 done 00:07:22.992 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:22.992 00:07:22.992 Allocating group tables: 0/1 done 00:07:22.992 Writing inode tables: 0/1 done 00:07:22.992 Creating journal (1024 blocks): done 00:07:23.251 Writing superblocks and filesystem accounting information: 0/1 done 00:07:23.251 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:23.251 18:01:57 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74804 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74804 ']' 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74804 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74804 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.251 killing process with pid 74804 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74804' 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74804 00:07:23.251 18:01:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74804 00:07:24.633 18:01:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:24.633 00:07:24.633 real 0m10.705s 00:07:24.633 user 0m15.066s 00:07:24.633 sys 0m3.639s 00:07:24.633 ************************************ 00:07:24.633 END TEST bdev_nbd 00:07:24.633 ************************************ 00:07:24.633 18:01:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.634 18:01:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:24.634 18:01:58 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:24.634 18:01:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:24.634 18:01:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:24.634 skipping fio tests on NVMe due to multi-ns failures. 00:07:24.634 18:01:58 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
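A note for readers reconstructing the trace above: the bdev_nbd test that just ended attaches each bdev to a /dev/nbdX node over the spdk-nbd socket, and every attach is gated by the waitfornbd helper from common/autotest_common.sh, whose xtrace lines (@872 through @893) dominate this section. A minimal bash sketch of that pattern, assuming the same 20-iteration bound seen in the trace (the poll interval and temp-file path here are illustrative, not the exact SPDK source):

    waitfornbd() {
        local nbd_name=$1 i size tmp_file=/tmp/nbdtest
        # First loop: wait for the kernel to publish the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # back-off interval assumed, not shown in the trace
        done
        # Then prove the device is readable: one 4 KiB O_DIRECT read must
        # produce a non-empty file, mirroring the dd/stat/rm sequence at @889-@893.
        dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        [ "$size" != 0 ]
    }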
00:07:24.634 18:01:58 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:24.634 18:01:58 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:24.634 18:01:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:24.634 18:01:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.634 18:01:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.634 ************************************ 00:07:24.634 START TEST bdev_verify 00:07:24.634 ************************************ 00:07:24.634 18:01:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:24.634 [2024-12-13 18:01:58.679582] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:24.634 [2024-12-13 18:01:58.679699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75220 ] 00:07:24.634 [2024-12-13 18:01:58.820013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.634 [2024-12-13 18:01:58.837419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.634 [2024-12-13 18:01:58.837437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.893 Running I/O for 5 seconds... 
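For reference, the verify stage starting here is the stock bdevperf example driven with the flags visible in the traced invocation, so it can be replayed by hand against the same bdev.json (repo path shortened below):

    # -q 128: 128 outstanding I/Os per job, -o 4096: 4 KiB I/O size,
    # -w verify: write, read back and compare, -t 5: run for 5 seconds,
    # -m 0x3: core mask for the two reactors; -C is carried over verbatim
    # from the traced command line.
    ./build/examples/bdevperf --json ./test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3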
00:07:27.247 24448.00 IOPS, 95.50 MiB/s [2024-12-13T18:02:02.565Z] 24736.00 IOPS, 96.62 MiB/s [2024-12-13T18:02:03.501Z] 24874.67 IOPS, 97.17 MiB/s [2024-12-13T18:02:04.434Z] 24288.00 IOPS, 94.88 MiB/s [2024-12-13T18:02:04.434Z] 23526.40 IOPS, 91.90 MiB/s
00:07:30.057 Latency(us)
00:07:30.057 [2024-12-13T18:02:04.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:30.057 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x0 length 0xbd0bd
00:07:30.057 Nvme0n1 : 5.06 1643.87 6.42 0.00 0.00 77559.60 16031.11 79853.10
00:07:30.057 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:30.057 Nvme0n1 : 5.06 1668.14 6.52 0.00 0.00 76553.78 12855.14 77433.30
00:07:30.057 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x0 length 0x4ff80
00:07:30.057 Nvme1n1p1 : 5.06 1643.34 6.42 0.00 0.00 77419.12 17946.78 72997.02
00:07:30.057 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:30.057 Nvme1n1p1 : 5.07 1666.82 6.51 0.00 0.00 76486.56 15426.17 74610.22
00:07:30.057 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x0 length 0x4ff7f
00:07:30.057 Nvme1n1p2 : 5.06 1642.83 6.42 0.00 0.00 77311.04 18350.08 70980.53
00:07:30.057 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:30.057 Nvme1n1p2 : 5.07 1665.39 6.51 0.00 0.00 76379.12 14922.04 71383.83
00:07:30.057 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x0 length 0x80000
00:07:30.057 Nvme2n1 : 5.07 1641.56 6.41 0.00 0.00 77199.74 19862.45 65737.65
00:07:30.057 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x80000 length 0x80000
00:07:30.057 Nvme2n1 : 5.07 1664.94 6.50 0.00 0.00 76255.51 14216.27 67754.14
00:07:30.057 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x0 length 0x80000
00:07:30.057 Nvme2n2 : 5.08 1648.91 6.44 0.00 0.00 76777.48 5847.83 66140.95
00:07:30.057 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x80000 length 0x80000
00:07:30.057 Nvme2n2 : 5.08 1664.48 6.50 0.00 0.00 76132.16 14317.10 68964.04
00:07:30.057 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x0 length 0x80000
00:07:30.057 Nvme2n3 : 5.09 1648.48 6.44 0.00 0.00 76639.16 6150.30 69770.63
00:07:30.057 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x80000 length 0x80000
00:07:30.057 Nvme2n3 : 5.08 1664.04 6.50 0.00 0.00 75998.27 14115.45 73400.32
00:07:30.057 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x0 length 0x20000
00:07:30.057 Nvme3n1 : 5.11 1654.82 6.46 0.00 0.00 76262.91 9729.58 74610.22
00:07:30.057 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:30.057 Verification LBA range: start 0x20000 length 0x20000
00:07:30.057 Nvme3n1
: 5.08 1663.57 6.50 0.00 0.00 75870.55 10233.70 77030.01 00:07:30.057 [2024-12-13T18:02:04.434Z] =================================================================================================================== 00:07:30.057 [2024-12-13T18:02:04.434Z] Total : 23181.19 90.55 0.00 0.00 76628.48 5847.83 79853.10 00:07:30.999 00:07:30.999 real 0m6.409s 00:07:30.999 user 0m11.789s 00:07:30.999 sys 0m0.182s 00:07:30.999 ************************************ 00:07:30.999 END TEST bdev_verify 00:07:30.999 ************************************ 00:07:30.999 18:02:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.999 18:02:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:30.999 18:02:05 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:30.999 18:02:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:30.999 18:02:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.999 18:02:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.999 ************************************ 00:07:30.999 START TEST bdev_verify_big_io 00:07:30.999 ************************************ 00:07:30.999 18:02:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:30.999 [2024-12-13 18:02:05.136786] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:30.999 [2024-12-13 18:02:05.136895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75307 ] 00:07:30.999 [2024-12-13 18:02:05.284594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.999 [2024-12-13 18:02:05.305690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.999 [2024-12-13 18:02:05.305730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.572 Running I/O for 5 seconds... 
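A quick consistency check ties the two right-hand summary columns of the 4 KiB table above together: throughput in MiB/s is IOPS * 4096 / 2^20, i.e. IOPS / 256. The Total row reports 23181.19 IOPS, and 23181.19 / 256 = 90.55 MiB/s, exactly the printed figure, so the IOPS and MiB/s columns agree. The same identity with a 64 KiB I/O size (IOPS / 16) applies to the big-I/O run that starts here.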
00:07:37.418 321.00 IOPS, 20.06 MiB/s [2024-12-13T18:02:11.795Z] 2536.00 IOPS, 158.50 MiB/s [2024-12-13T18:02:12.057Z] 3101.67 IOPS, 193.85 MiB/s 00:07:37.680 Latency(us) 00:07:37.680 [2024-12-13T18:02:12.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.680 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x0 length 0xbd0b 00:07:37.680 Nvme0n1 : 5.90 70.53 4.41 0.00 0.00 1715308.34 30852.33 2193943.63 00:07:37.680 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:37.680 Nvme0n1 : 5.73 117.54 7.35 0.00 0.00 1037962.22 23189.66 1167952.34 00:07:37.680 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x0 length 0x4ff8 00:07:37.680 Nvme1n1p1 : 5.90 108.99 6.81 0.00 0.00 1091536.69 95985.03 1103424.59 00:07:37.680 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:37.680 Nvme1n1p1 : 5.83 119.99 7.50 0.00 0.00 984096.13 75013.51 1045349.61 00:07:37.680 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x0 length 0x4ff7 00:07:37.680 Nvme1n1p2 : 5.83 109.77 6.86 0.00 0.00 1062804.24 130668.70 1006632.96 00:07:37.680 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:37.680 Nvme1n1p2 : 5.90 126.35 7.90 0.00 0.00 920057.23 97194.93 1045349.61 00:07:37.680 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x0 length 0x8000 00:07:37.680 Nvme2n1 : 5.90 113.68 7.11 0.00 0.00 1001275.88 69367.34 1103424.59 00:07:37.680 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x8000 length 0x8000 00:07:37.680 Nvme2n1 : 5.90 123.79 7.74 0.00 0.00 905789.84 97194.93 987274.63 00:07:37.680 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x0 length 0x8000 00:07:37.680 Nvme2n2 : 5.94 118.53 7.41 0.00 0.00 937933.73 34683.67 1051802.39 00:07:37.680 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x8000 length 0x8000 00:07:37.680 Nvme2n2 : 5.95 126.06 7.88 0.00 0.00 876948.04 49000.76 2051982.57 00:07:37.680 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x0 length 0x8000 00:07:37.680 Nvme2n3 : 5.95 123.50 7.72 0.00 0.00 876910.47 9225.45 1090519.04 00:07:37.680 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x8000 length 0x8000 00:07:37.680 Nvme2n3 : 6.03 135.38 8.46 0.00 0.00 795524.75 29239.14 2077793.67 00:07:37.680 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x0 length 0x2000 00:07:37.680 Nvme3n1 : 6.02 143.74 8.98 0.00 0.00 733364.57 1323.32 1122782.92 00:07:37.680 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.680 Verification LBA range: start 0x2000 length 0x2000 00:07:37.680 Nvme3n1 : 6.05 150.49 9.41 0.00 0.00 696520.23 1461.96 2103604.78 00:07:37.680 
[2024-12-13T18:02:12.057Z] =================================================================================================================== 00:07:37.680 [2024-12-13T18:02:12.057Z] Total : 1688.36 105.52 0.00 0.00 938307.57 1323.32 2193943.63 00:07:39.151 00:07:39.151 real 0m8.207s 00:07:39.151 user 0m14.782s 00:07:39.151 sys 0m0.224s 00:07:39.151 ************************************ 00:07:39.151 END TEST bdev_verify_big_io 00:07:39.151 ************************************ 00:07:39.151 18:02:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.151 18:02:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:39.151 18:02:13 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.151 18:02:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:39.151 18:02:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.151 18:02:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.151 ************************************ 00:07:39.151 START TEST bdev_write_zeroes 00:07:39.151 ************************************ 00:07:39.151 18:02:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.151 [2024-12-13 18:02:13.375357] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:39.151 [2024-12-13 18:02:13.375470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75405 ] 00:07:39.151 [2024-12-13 18:02:13.515679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.412 [2024-12-13 18:02:13.531997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.673 Running I/O for 1 seconds... 
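The Average/min/max columns in both verify tables are microseconds (Latency(us)), and they line up with Little's law for a closed queue: average latency is roughly queue depth / IOPS. For the 4 KiB Nvme0n1 job (-q 128 at 1643.87 IOPS) that predicts 128 / 1643.87 = 0.0779 s, about 77,900 us, against a reported average of 77,559.60 us. The 64 KiB rows follow the same relation at far lower IOPS, which is why their averages land near a second: 128 / 70.53 = 1.81 s versus the reported 1,715,308.34 us for the first Nvme0n1 job.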
00:07:40.614 64064.00 IOPS, 250.25 MiB/s
00:07:40.614 Latency(us)
00:07:40.614 [2024-12-13T18:02:14.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:40.614 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:40.614 Nvme0n1 : 1.02 9124.43 35.64 0.00 0.00 13999.17 5318.50 28835.84
00:07:40.614 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:40.614 Nvme1n1p1 : 1.03 9113.23 35.60 0.00 0.00 13999.73 9679.16 23290.49
00:07:40.614 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:40.614 Nvme1n1p2 : 1.03 9102.16 35.56 0.00 0.00 13939.15 10233.70 23391.31
00:07:40.614 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:40.614 Nvme2n1 : 1.03 9091.94 35.52 0.00 0.00 13935.44 10485.76 23592.96
00:07:40.614 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:40.614 Nvme2n2 : 1.03 9081.71 35.48 0.00 0.00 13916.96 9981.64 23895.43
00:07:40.614 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:40.614 Nvme2n3 : 1.03 9071.57 35.44 0.00 0.00 13913.52 9931.22 24197.91
00:07:40.614 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:40.614 Nvme3n1 : 1.03 9061.44 35.40 0.00 0.00 13910.53 9981.64 23492.14
00:07:40.614 [2024-12-13T18:02:14.991Z] ===================================================================================================================
00:07:40.614 [2024-12-13T18:02:14.991Z] Total : 63646.48 248.62 0.00 0.00 13944.93 5318.50 28835.84
00:07:40.883
00:07:40.883 real 0m1.781s user 0m1.535s sys 0m0.135s 18:02:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.883 18:02:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:40.883 ************************************
00:07:40.883 END TEST bdev_write_zeroes
00:07:40.883 ************************************
00:07:40.883 18:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.883 18:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:40.883 18:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.883 18:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:40.883 ************************************
00:07:40.883 START TEST bdev_json_nonenclosed
00:07:40.883 ************************************
00:07:40.883 18:02:15 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.883 [2024-12-13 18:02:15.198210] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
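The bdev_json_nonenclosed case launching here is a negative test: bdevperf is pointed at test/bdev/nonenclosed.json and is expected to fail config parsing with the 'not enclosed in {}' error that appears in the next stretch of log. The fixture's literal contents are never echoed into the log; an input of roughly this shape would trip that check, since the top level of an SPDK JSON config must be a single object (illustrative guess only, not the actual fixture):

    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]

A well-formed config wraps the same content as { "subsystems": [ ... ] }.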
00:07:40.883 [2024-12-13 18:02:15.198353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75436 ] 00:07:41.144 [2024-12-13 18:02:15.347211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.144 [2024-12-13 18:02:15.365140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.144 [2024-12-13 18:02:15.365210] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:41.144 [2024-12-13 18:02:15.365223] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:41.144 [2024-12-13 18:02:15.365235] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.144 00:07:41.144 real 0m0.284s 00:07:41.144 user 0m0.108s 00:07:41.144 sys 0m0.073s 00:07:41.144 18:02:15 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.144 18:02:15 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:41.144 ************************************ 00:07:41.144 END TEST bdev_json_nonenclosed 00:07:41.144 ************************************ 00:07:41.144 18:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.144 18:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:41.144 18:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.144 18:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.144 ************************************ 00:07:41.144 START TEST bdev_json_nonarray 00:07:41.144 ************************************ 00:07:41.144 18:02:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.405 [2024-12-13 18:02:15.522882] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:41.405 [2024-12-13 18:02:15.522982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75467 ] 00:07:41.405 [2024-12-13 18:02:15.668125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.405 [2024-12-13 18:02:15.686128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.405 [2024-12-13 18:02:15.686209] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
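Its sibling bdev_json_nonarray, which just failed above with the *ERROR* message that 'subsystems' should be an array, probes one level deeper: the outer object parses, but the subsystems key holds a non-array value. Again a guess at the shape rather than the literal contents of test/bdev/nonarray.json:

    {
      "subsystems": { "subsystem": "bdev", "config": [] }
    }

In both negative cases the app is expected to exit through spdk_app_stop with a non-zero status, which is the passing outcome here, as the END TEST banner that follows confirms.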
00:07:41.405 [2024-12-13 18:02:15.686226] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:41.405 [2024-12-13 18:02:15.686239] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.405 00:07:41.405 real 0m0.276s 00:07:41.405 user 0m0.105s 00:07:41.405 sys 0m0.068s 00:07:41.405 18:02:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.405 18:02:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:41.405 ************************************ 00:07:41.405 END TEST bdev_json_nonarray 00:07:41.405 ************************************ 00:07:41.405 18:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:41.405 18:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:41.405 18:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:41.405 18:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.405 18:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.405 18:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.405 ************************************ 00:07:41.405 START TEST bdev_gpt_uuid 00:07:41.405 ************************************ 00:07:41.405 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:41.405 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:41.405 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:41.405 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75487 00:07:41.405 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:41.666 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75487 00:07:41.666 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 75487 ']' 00:07:41.666 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.666 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.666 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.666 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.666 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:41.667 18:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:41.667 [2024-12-13 18:02:15.850295] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
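The bdev_gpt_uuid test spinning up here asserts, over the plain RPC surface, that a GPT partition bdev can be looked up by its unique partition GUID and that the GUID round-trips through the bdev's alias and driver_specific.gpt data. The same probe can be run by hand against the live spdk_tgt (GUID taken from this run; rpc.py defaults to the /var/tmp/spdk.sock socket used below):

    # Fetch the partition bdev by GUID and print the two fields the test
    # asserts on: its alias and the GUID echoed back by the GPT layer.
    ./scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'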
00:07:41.667 [2024-12-13 18:02:15.850412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75487 ] 00:07:41.667 [2024-12-13 18:02:15.995663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.667 [2024-12-13 18:02:16.013680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.612 Some configs were skipped because the RPC state that can call them passed over. 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.612 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.874 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.874 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:42.874 { 00:07:42.874 "name": "Nvme1n1p1", 00:07:42.874 "aliases": [ 00:07:42.874 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:42.874 ], 00:07:42.874 "product_name": "GPT Disk", 00:07:42.874 "block_size": 4096, 00:07:42.874 "num_blocks": 655104, 00:07:42.874 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:42.874 "assigned_rate_limits": { 00:07:42.874 "rw_ios_per_sec": 0, 00:07:42.874 "rw_mbytes_per_sec": 0, 00:07:42.874 "r_mbytes_per_sec": 0, 00:07:42.874 "w_mbytes_per_sec": 0 00:07:42.874 }, 00:07:42.874 "claimed": false, 00:07:42.874 "zoned": false, 00:07:42.874 "supported_io_types": { 00:07:42.874 "read": true, 00:07:42.874 "write": true, 00:07:42.874 "unmap": true, 00:07:42.874 "flush": true, 00:07:42.874 "reset": true, 00:07:42.874 "nvme_admin": false, 00:07:42.874 "nvme_io": false, 00:07:42.874 "nvme_io_md": false, 00:07:42.874 "write_zeroes": true, 00:07:42.874 "zcopy": false, 00:07:42.874 "get_zone_info": false, 00:07:42.874 "zone_management": false, 00:07:42.874 "zone_append": false, 00:07:42.874 "compare": true, 00:07:42.874 "compare_and_write": false, 00:07:42.874 "abort": true, 00:07:42.874 "seek_hole": false, 00:07:42.874 "seek_data": false, 00:07:42.874 "copy": true, 00:07:42.874 "nvme_iov_md": false 00:07:42.874 }, 00:07:42.874 "driver_specific": { 
00:07:42.874 "gpt": { 00:07:42.874 "base_bdev": "Nvme1n1", 00:07:42.874 "offset_blocks": 256, 00:07:42.874 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:42.874 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:42.874 "partition_name": "SPDK_TEST_first" 00:07:42.874 } 00:07:42.874 } 00:07:42.874 } 00:07:42.874 ]' 00:07:42.874 18:02:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:42.874 { 00:07:42.874 "name": "Nvme1n1p2", 00:07:42.874 "aliases": [ 00:07:42.874 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:42.874 ], 00:07:42.874 "product_name": "GPT Disk", 00:07:42.874 "block_size": 4096, 00:07:42.874 "num_blocks": 655103, 00:07:42.874 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:42.874 "assigned_rate_limits": { 00:07:42.874 "rw_ios_per_sec": 0, 00:07:42.874 "rw_mbytes_per_sec": 0, 00:07:42.874 "r_mbytes_per_sec": 0, 00:07:42.874 "w_mbytes_per_sec": 0 00:07:42.874 }, 00:07:42.874 "claimed": false, 00:07:42.874 "zoned": false, 00:07:42.874 "supported_io_types": { 00:07:42.874 "read": true, 00:07:42.874 "write": true, 00:07:42.874 "unmap": true, 00:07:42.874 "flush": true, 00:07:42.874 "reset": true, 00:07:42.874 "nvme_admin": false, 00:07:42.874 "nvme_io": false, 00:07:42.874 "nvme_io_md": false, 00:07:42.874 "write_zeroes": true, 00:07:42.874 "zcopy": false, 00:07:42.874 "get_zone_info": false, 00:07:42.874 "zone_management": false, 00:07:42.874 "zone_append": false, 00:07:42.874 "compare": true, 00:07:42.874 "compare_and_write": false, 00:07:42.874 "abort": true, 00:07:42.874 "seek_hole": false, 00:07:42.874 "seek_data": false, 00:07:42.874 "copy": true, 00:07:42.874 "nvme_iov_md": false 00:07:42.874 }, 00:07:42.874 "driver_specific": { 00:07:42.874 "gpt": { 00:07:42.874 "base_bdev": "Nvme1n1", 00:07:42.874 "offset_blocks": 655360, 00:07:42.874 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:42.874 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:42.874 "partition_name": "SPDK_TEST_second" 00:07:42.874 } 00:07:42.874 } 00:07:42.874 } 00:07:42.874 ]' 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 75487 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 75487 ']' 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 75487 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75487 00:07:42.874 killing process with pid 75487 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75487' 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 75487 00:07:42.874 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 75487 00:07:43.136 00:07:43.136 real 0m1.701s 00:07:43.136 user 0m1.869s 00:07:43.136 sys 0m0.302s 00:07:43.136 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.136 18:02:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.136 ************************************ 00:07:43.136 END TEST bdev_gpt_uuid 00:07:43.136 ************************************ 00:07:43.398 18:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:43.398 18:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:43.398 18:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:43.398 18:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:43.398 18:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.398 18:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:43.398 18:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:43.398 18:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:43.398 18:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:43.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.660 Waiting for block devices as requested 00:07:43.660 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.922 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:43.922 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.922 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:49.199 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:49.199 18:02:23 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:49.199 18:02:23 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:49.459 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:49.459 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:49.459 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:49.459 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:49.459 18:02:23 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:49.459 00:07:49.459 real 0m48.376s 00:07:49.459 user 1m0.586s 00:07:49.459 sys 0m7.337s 00:07:49.459 ************************************ 00:07:49.459 END TEST blockdev_nvme_gpt 00:07:49.459 ************************************ 00:07:49.459 18:02:23 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.459 18:02:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.459 18:02:23 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:49.459 18:02:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.459 18:02:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.459 18:02:23 -- common/autotest_common.sh@10 -- # set +x 00:07:49.459 ************************************ 00:07:49.459 START TEST nvme 00:07:49.459 ************************************ 00:07:49.459 18:02:23 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:49.459 * Looking for test storage... 00:07:49.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:49.459 18:02:23 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:49.459 18:02:23 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:49.459 18:02:23 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:49.764 18:02:23 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:49.764 18:02:23 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.764 18:02:23 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.764 18:02:23 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.764 18:02:23 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.764 18:02:23 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.764 18:02:23 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.764 18:02:23 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.764 18:02:23 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.764 18:02:23 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.764 18:02:23 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.764 18:02:23 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.764 18:02:23 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:49.764 18:02:23 nvme -- scripts/common.sh@345 -- # : 1 00:07:49.764 18:02:23 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.764 18:02:23 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.764 18:02:23 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:49.764 18:02:23 nvme -- scripts/common.sh@353 -- # local d=1 00:07:49.764 18:02:23 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.764 18:02:23 nvme -- scripts/common.sh@355 -- # echo 1 00:07:49.764 18:02:23 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.764 18:02:23 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:49.764 18:02:23 nvme -- scripts/common.sh@353 -- # local d=2 00:07:49.764 18:02:23 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.764 18:02:23 nvme -- scripts/common.sh@355 -- # echo 2 00:07:49.764 18:02:23 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.764 18:02:23 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.764 18:02:23 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.764 18:02:23 nvme -- scripts/common.sh@368 -- # return 0 00:07:49.764 18:02:23 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.764 18:02:23 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:49.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.764 --rc genhtml_branch_coverage=1 00:07:49.764 --rc genhtml_function_coverage=1 00:07:49.764 --rc genhtml_legend=1 00:07:49.764 --rc geninfo_all_blocks=1 00:07:49.764 --rc geninfo_unexecuted_blocks=1 00:07:49.764 00:07:49.764 ' 00:07:49.764 18:02:23 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:49.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.764 --rc genhtml_branch_coverage=1 00:07:49.764 --rc genhtml_function_coverage=1 00:07:49.764 --rc genhtml_legend=1 00:07:49.764 --rc geninfo_all_blocks=1 00:07:49.764 --rc geninfo_unexecuted_blocks=1 00:07:49.764 00:07:49.764 ' 00:07:49.764 18:02:23 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:49.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.764 --rc genhtml_branch_coverage=1 00:07:49.764 --rc genhtml_function_coverage=1 00:07:49.764 --rc genhtml_legend=1 00:07:49.764 --rc geninfo_all_blocks=1 00:07:49.764 --rc geninfo_unexecuted_blocks=1 00:07:49.764 00:07:49.764 ' 00:07:49.764 18:02:23 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:49.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.764 --rc genhtml_branch_coverage=1 00:07:49.764 --rc genhtml_function_coverage=1 00:07:49.764 --rc genhtml_legend=1 00:07:49.764 --rc geninfo_all_blocks=1 00:07:49.764 --rc geninfo_unexecuted_blocks=1 00:07:49.764 00:07:49.764 ' 00:07:49.764 18:02:23 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:50.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:50.305 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.564 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.564 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.564 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.564 18:02:24 nvme -- nvme/nvme.sh@79 -- # uname 00:07:50.564 18:02:24 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:50.564 18:02:24 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:50.564 18:02:24 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:50.564 18:02:24 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:50.564 18:02:24 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:50.564 18:02:24 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:50.564 18:02:24 nvme -- common/autotest_common.sh@1075 -- # stubpid=76110 00:07:50.564 Waiting for stub to ready for secondary processes... 00:07:50.564 18:02:24 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:50.564 18:02:24 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:50.564 18:02:24 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76110 ]] 00:07:50.564 18:02:24 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:50.564 18:02:24 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:50.564 [2024-12-13 18:02:24.834602] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:50.564 [2024-12-13 18:02:24.834711] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:51.502 [2024-12-13 18:02:25.545365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.502 [2024-12-13 18:02:25.557505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.502 [2024-12-13 18:02:25.557660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.502 [2024-12-13 18:02:25.557752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.502 [2024-12-13 18:02:25.567428] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:51.502 [2024-12-13 18:02:25.567468] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.502 [2024-12-13 18:02:25.578149] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:51.502 [2024-12-13 18:02:25.578298] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:51.502 [2024-12-13 18:02:25.578955] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.502 [2024-12-13 18:02:25.579085] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:51.502 [2024-12-13 18:02:25.579158] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:51.502 [2024-12-13 18:02:25.579532] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.502 [2024-12-13 18:02:25.579649] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:51.502 [2024-12-13 18:02:25.579681] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:51.502 [2024-12-13 18:02:25.580193] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:51.502 [2024-12-13 18:02:25.580323] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:51.502 [2024-12-13 18:02:25.580361] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:51.502 [2024-12-13 18:02:25.580397] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:51.502 [2024-12-13 18:02:25.580430] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:51.502 18:02:25 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:51.502 done. 00:07:51.502 18:02:25 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:51.502 18:02:25 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:51.502 18:02:25 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:51.502 18:02:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.502 18:02:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.502 ************************************ 00:07:51.502 START TEST nvme_reset 00:07:51.502 ************************************ 00:07:51.502 18:02:25 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:51.762 Initializing NVMe Controllers 00:07:51.762 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:51.762 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:51.762 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:51.762 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:51.762 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:51.762 00:07:51.762 real 0m0.180s 00:07:51.762 user 0m0.059s 00:07:51.762 sys 0m0.077s 00:07:51.762 18:02:25 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.762 ************************************ 00:07:51.762 18:02:25 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:51.762 END TEST nvme_reset 00:07:51.762 ************************************ 00:07:51.762 18:02:26 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:51.762 18:02:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.762 18:02:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.762 18:02:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.762 ************************************ 00:07:51.762 START TEST nvme_identify 00:07:51.762 ************************************ 00:07:51.762 18:02:26 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:51.762 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:51.762 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:51.762 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:51.762 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:51.762 18:02:26 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:51.762 18:02:26 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:51.762 18:02:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:51.762 18:02:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:51.762 18:02:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:51.762 18:02:26 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:51.762 18:02:26 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:51.762 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:52.024 [2024-12-13 
18:02:26.228272] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 76131 terminated unexpected 00:07:52.024 ===================================================== 00:07:52.024 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:52.024 ===================================================== 00:07:52.024 Controller Capabilities/Features 00:07:52.024 ================================ 00:07:52.024 Vendor ID: 1b36 00:07:52.024 Subsystem Vendor ID: 1af4 00:07:52.024 Serial Number: 12340 00:07:52.024 Model Number: QEMU NVMe Ctrl 00:07:52.024 Firmware Version: 8.0.0 00:07:52.024 Recommended Arb Burst: 6 00:07:52.024 IEEE OUI Identifier: 00 54 52 00:07:52.024 Multi-path I/O 00:07:52.024 May have multiple subsystem ports: No 00:07:52.024 May have multiple controllers: No 00:07:52.024 Associated with SR-IOV VF: No 00:07:52.024 Max Data Transfer Size: 524288 00:07:52.024 Max Number of Namespaces: 256 00:07:52.024 Max Number of I/O Queues: 64 00:07:52.024 NVMe Specification Version (VS): 1.4 00:07:52.024 NVMe Specification Version (Identify): 1.4 00:07:52.024 Maximum Queue Entries: 2048 00:07:52.024 Contiguous Queues Required: Yes 00:07:52.024 Arbitration Mechanisms Supported 00:07:52.024 Weighted Round Robin: Not Supported 00:07:52.024 Vendor Specific: Not Supported 00:07:52.024 Reset Timeout: 7500 ms 00:07:52.024 Doorbell Stride: 4 bytes 00:07:52.024 NVM Subsystem Reset: Not Supported 00:07:52.024 Command Sets Supported 00:07:52.024 NVM Command Set: Supported 00:07:52.024 Boot Partition: Not Supported 00:07:52.024 Memory Page Size Minimum: 4096 bytes 00:07:52.024 Memory Page Size Maximum: 65536 bytes 00:07:52.024 Persistent Memory Region: Not Supported 00:07:52.024 Optional Asynchronous Events Supported 00:07:52.024 Namespace Attribute Notices: Supported 00:07:52.024 Firmware Activation Notices: Not Supported 00:07:52.024 ANA Change Notices: Not Supported 00:07:52.024 PLE Aggregate Log Change Notices: Not Supported 00:07:52.024 LBA Status Info Alert Notices: Not Supported 00:07:52.024 EGE Aggregate Log Change Notices: Not Supported 00:07:52.024 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.024 Zone Descriptor Change Notices: Not Supported 00:07:52.024 Discovery Log Change Notices: Not Supported 00:07:52.024 Controller Attributes 00:07:52.024 128-bit Host Identifier: Not Supported 00:07:52.024 Non-Operational Permissive Mode: Not Supported 00:07:52.024 NVM Sets: Not Supported 00:07:52.024 Read Recovery Levels: Not Supported 00:07:52.024 Endurance Groups: Not Supported 00:07:52.024 Predictable Latency Mode: Not Supported 00:07:52.024 Traffic Based Keep ALive: Not Supported 00:07:52.024 Namespace Granularity: Not Supported 00:07:52.024 SQ Associations: Not Supported 00:07:52.024 UUID List: Not Supported 00:07:52.024 Multi-Domain Subsystem: Not Supported 00:07:52.024 Fixed Capacity Management: Not Supported 00:07:52.024 Variable Capacity Management: Not Supported 00:07:52.024 Delete Endurance Group: Not Supported 00:07:52.024 Delete NVM Set: Not Supported 00:07:52.024 Extended LBA Formats Supported: Supported 00:07:52.024 Flexible Data Placement Supported: Not Supported 00:07:52.024 00:07:52.024 Controller Memory Buffer Support 00:07:52.024 ================================ 00:07:52.024 Supported: No 00:07:52.024 00:07:52.024 Persistent Memory Region Support 00:07:52.024 ================================ 00:07:52.024 Supported: No 00:07:52.024 00:07:52.024 Admin Command Set Attributes 00:07:52.024 ============================ 00:07:52.024 Security Send/Receive: 
Not Supported 00:07:52.024 Format NVM: Supported 00:07:52.024 Firmware Activate/Download: Not Supported 00:07:52.024 Namespace Management: Supported 00:07:52.024 Device Self-Test: Not Supported 00:07:52.024 Directives: Supported 00:07:52.024 NVMe-MI: Not Supported 00:07:52.024 Virtualization Management: Not Supported 00:07:52.024 Doorbell Buffer Config: Supported 00:07:52.024 Get LBA Status Capability: Not Supported 00:07:52.024 Command & Feature Lockdown Capability: Not Supported 00:07:52.024 Abort Command Limit: 4 00:07:52.024 Async Event Request Limit: 4 00:07:52.024 Number of Firmware Slots: N/A 00:07:52.024 Firmware Slot 1 Read-Only: N/A 00:07:52.024 Firmware Activation Without Reset: N/A 00:07:52.024 Multiple Update Detection Support: N/A 00:07:52.024 Firmware Update Granularity: No Information Provided 00:07:52.024 Per-Namespace SMART Log: Yes 00:07:52.024 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.024 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:52.024 Command Effects Log Page: Supported 00:07:52.024 Get Log Page Extended Data: Supported 00:07:52.024 Telemetry Log Pages: Not Supported 00:07:52.024 Persistent Event Log Pages: Not Supported 00:07:52.024 Supported Log Pages Log Page: May Support 00:07:52.024 Commands Supported & Effects Log Page: Not Supported 00:07:52.024 Feature Identifiers & Effects Log Page:May Support 00:07:52.024 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.024 Data Area 4 for Telemetry Log: Not Supported 00:07:52.024 Error Log Page Entries Supported: 1 00:07:52.024 Keep Alive: Not Supported 00:07:52.024 00:07:52.024 NVM Command Set Attributes 00:07:52.024 ========================== 00:07:52.024 Submission Queue Entry Size 00:07:52.024 Max: 64 00:07:52.024 Min: 64 00:07:52.024 Completion Queue Entry Size 00:07:52.024 Max: 16 00:07:52.024 Min: 16 00:07:52.024 Number of Namespaces: 256 00:07:52.024 Compare Command: Supported 00:07:52.024 Write Uncorrectable Command: Not Supported 00:07:52.024 Dataset Management Command: Supported 00:07:52.024 Write Zeroes Command: Supported 00:07:52.024 Set Features Save Field: Supported 00:07:52.024 Reservations: Not Supported 00:07:52.024 Timestamp: Supported 00:07:52.024 Copy: Supported 00:07:52.024 Volatile Write Cache: Present 00:07:52.024 Atomic Write Unit (Normal): 1 00:07:52.024 Atomic Write Unit (PFail): 1 00:07:52.024 Atomic Compare & Write Unit: 1 00:07:52.024 Fused Compare & Write: Not Supported 00:07:52.024 Scatter-Gather List 00:07:52.024 SGL Command Set: Supported 00:07:52.024 SGL Keyed: Not Supported 00:07:52.024 SGL Bit Bucket Descriptor: Not Supported 00:07:52.024 SGL Metadata Pointer: Not Supported 00:07:52.024 Oversized SGL: Not Supported 00:07:52.024 SGL Metadata Address: Not Supported 00:07:52.024 SGL Offset: Not Supported 00:07:52.024 Transport SGL Data Block: Not Supported 00:07:52.024 Replay Protected Memory Block: Not Supported 00:07:52.024 00:07:52.024 Firmware Slot Information 00:07:52.024 ========================= 00:07:52.024 Active slot: 1 00:07:52.024 Slot 1 Firmware Revision: 1.0 00:07:52.024 00:07:52.024 00:07:52.024 Commands Supported and Effects 00:07:52.024 ============================== 00:07:52.024 Admin Commands 00:07:52.024 -------------- 00:07:52.024 Delete I/O Submission Queue (00h): Supported 00:07:52.024 Create I/O Submission Queue (01h): Supported 00:07:52.024 Get Log Page (02h): Supported 00:07:52.024 Delete I/O Completion Queue (04h): Supported 00:07:52.024 Create I/O Completion Queue (05h): Supported 00:07:52.024 Identify (06h): Supported 
00:07:52.024 Abort (08h): Supported 00:07:52.024 Set Features (09h): Supported 00:07:52.024 Get Features (0Ah): Supported 00:07:52.025 Asynchronous Event Request (0Ch): Supported 00:07:52.025 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.025 Directive Send (19h): Supported 00:07:52.025 Directive Receive (1Ah): Supported 00:07:52.025 Virtualization Management (1Ch): Supported 00:07:52.025 Doorbell Buffer Config (7Ch): Supported 00:07:52.025 Format NVM (80h): Supported LBA-Change 00:07:52.025 I/O Commands 00:07:52.025 ------------ 00:07:52.025 Flush (00h): Supported LBA-Change 00:07:52.025 Write (01h): Supported LBA-Change 00:07:52.025 Read (02h): Supported 00:07:52.025 Compare (05h): Supported 00:07:52.025 Write Zeroes (08h): Supported LBA-Change 00:07:52.025 Dataset Management (09h): Supported LBA-Change 00:07:52.025 Unknown (0Ch): Supported 00:07:52.025 Unknown (12h): Supported 00:07:52.025 Copy (19h): Supported LBA-Change 00:07:52.025 Unknown (1Dh): Supported LBA-Change 00:07:52.025 00:07:52.025 Error Log 00:07:52.025 ========= 00:07:52.025 00:07:52.025 Arbitration 00:07:52.025 =========== 00:07:52.025 Arbitration Burst: no limit 00:07:52.025 00:07:52.025 Power Management 00:07:52.025 ================ 00:07:52.025 Number of Power States: 1 00:07:52.025 Current Power State: Power State #0 00:07:52.025 Power State #0: 00:07:52.025 Max Power: 25.00 W 00:07:52.025 Non-Operational State: Operational 00:07:52.025 Entry Latency: 16 microseconds 00:07:52.025 Exit Latency: 4 microseconds 00:07:52.025 Relative Read Throughput: 0 00:07:52.025 Relative Read Latency: 0 00:07:52.025 Relative Write Throughput: 0 00:07:52.025 Relative Write Latency: 0 00:07:52.025 Idle Power: Not Reported 00:07:52.025 Active Power: Not Reported 00:07:52.025 Non-Operational Permissive Mode: Not Supported 00:07:52.025 00:07:52.025 Health Information 00:07:52.025 ================== 00:07:52.025 Critical Warnings: 00:07:52.025 Available Spare Space: OK 00:07:52.025 Temperature: OK 00:07:52.025 Device Reliability: OK 00:07:52.025 Read Only: No 00:07:52.025 Volatile Memory Backup: OK 00:07:52.025 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.025 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.025 Available Spare: 0% 00:07:52.025 Available Spare Threshold: 0% 00:07:52.025 Life Percentage Used: 0% 00:07:52.025 Data Units Read: 694 00:07:52.025 Data Units Written: 622 00:07:52.025 Host Read Commands: 39162 00:07:52.025 Host Write Commands: 38948 00:07:52.025 Controller Busy Time: 0 minutes 00:07:52.025 Power Cycles: 0 00:07:52.025 Power On Hours: 0 hours 00:07:52.025 Unsafe Shutdowns: 0 00:07:52.025 Unrecoverable Media Errors: 0 00:07:52.025 Lifetime Error Log Entries: 0 00:07:52.025 Warning Temperature Time: 0 minutes 00:07:52.025 Critical Temperature Time: 0 minutes 00:07:52.025 00:07:52.025 Number of Queues 00:07:52.025 ================ 00:07:52.025 Number of I/O Submission Queues: 64 00:07:52.025 Number of I/O Completion Queues: 64 00:07:52.025 00:07:52.025 ZNS Specific Controller Data 00:07:52.025 ============================ 00:07:52.025 Zone Append Size Limit: 0 00:07:52.025 00:07:52.025 00:07:52.025 Active Namespaces 00:07:52.025 ================= 00:07:52.025 Namespace ID:1 00:07:52.025 Error Recovery Timeout: Unlimited 00:07:52.025 Command Set Identifier: NVM (00h) 00:07:52.025 Deallocate: Supported 00:07:52.025 Deallocated/Unwritten Error: Supported 00:07:52.025 Deallocated Read Value: All 0x00 00:07:52.025 Deallocate in Write Zeroes: Not Supported 00:07:52.025 Deallocated Guard 
Field: 0xFFFF 00:07:52.025 Flush: Supported 00:07:52.025 Reservation: Not Supported 00:07:52.025 Metadata Transferred as: Separate Metadata Buffer 00:07:52.025 Namespace Sharing Capabilities: Private 00:07:52.025 Size (in LBAs): 1548666 (5GiB) 00:07:52.025 Capacity (in LBAs): 1548666 (5GiB) 00:07:52.025 Utilization (in LBAs): 1548666 (5GiB) 00:07:52.025 Thin Provisioning: Not Supported 00:07:52.025 Per-NS Atomic Units: No 00:07:52.025 Maximum Single Source Range Length: 128 00:07:52.025 Maximum Copy Length: 128 00:07:52.025 Maximum Source Range Count: 128 00:07:52.025 NGUID/EUI64 Never Reused: No 00:07:52.025 Namespace Write Protected: No 00:07:52.025 Number of LBA Formats: 8 00:07:52.025 Current LBA Format: LBA Format #07 00:07:52.025 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.025 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.025 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.025 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.025 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.025 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.025 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.025 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.025 00:07:52.025 NVM Specific Namespace Data 00:07:52.025 =========================== 00:07:52.025 Logical Block Storage Tag Mask: 0 00:07:52.025 Protection Information Capabilities: 00:07:52.025 16b Guard Protection Information Storage Tag Support: No 00:07:52.025 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.025 Storage Tag Check Read Support: No 00:07:52.025 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.025 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.025 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.025 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.025 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.025 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.025 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.025 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.025 ===================================================== 00:07:52.025 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:52.025 ===================================================== 00:07:52.025 Controller Capabilities/Features 00:07:52.025 ================================ 00:07:52.025 Vendor ID: 1b36 00:07:52.025 Subsystem Vendor ID: 1af4 00:07:52.025 Serial Number: 12341 00:07:52.025 Model Number: QEMU NVMe Ctrl 00:07:52.025 Firmware Version: 8.0.0 00:07:52.025 Recommended Arb Burst: 6 00:07:52.025 IEEE OUI Identifier: 00 54 52 00:07:52.025 Multi-path I/O 00:07:52.025 May have multiple subsystem ports: No 00:07:52.025 May have multiple controllers: No 00:07:52.025 Associated with SR-IOV VF: No 00:07:52.025 Max Data Transfer Size: 524288 00:07:52.025 Max Number of Namespaces: 256 00:07:52.025 Max Number of I/O Queues: 64 00:07:52.025 NVMe Specification Version (VS): 1.4 00:07:52.025 NVMe Specification Version (Identify): 1.4 00:07:52.025 Maximum Queue Entries: 2048 00:07:52.025 Contiguous Queues Required: Yes 00:07:52.025 Arbitration Mechanisms Supported 00:07:52.025 
Weighted Round Robin: Not Supported 00:07:52.025 Vendor Specific: Not Supported 00:07:52.025 Reset Timeout: 7500 ms 00:07:52.025 Doorbell Stride: 4 bytes 00:07:52.025 NVM Subsystem Reset: Not Supported 00:07:52.025 Command Sets Supported 00:07:52.025 NVM Command Set: Supported 00:07:52.025 Boot Partition: Not Supported 00:07:52.025 Memory Page Size Minimum: 4096 bytes 00:07:52.025 Memory Page Size Maximum: 65536 bytes 00:07:52.025 Persistent Memory Region: Not Supported 00:07:52.025 Optional Asynchronous Events Supported 00:07:52.025 Namespace Attribute Notices: Supported 00:07:52.025 Firmware Activation Notices: Not Supported 00:07:52.025 ANA Change Notices: Not Supported 00:07:52.025 PLE Aggregate Log Change Notices: Not Supported 00:07:52.025 LBA Status Info Alert Notices: Not Supported 00:07:52.025 EGE Aggregate Log Change Notices: Not Supported 00:07:52.025 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.025 Zone Descriptor Change Notices: Not Supported 00:07:52.025 Discovery Log Change Notices: Not Supported 00:07:52.025 Controller Attributes 00:07:52.025 128-bit Host Identifier: Not Supported 00:07:52.025 Non-Operational Permissive Mode: Not Supported 00:07:52.025 NVM Sets: Not Supported 00:07:52.025 Read Recovery Levels: Not Supported 00:07:52.025 Endurance Groups: Not Supported 00:07:52.025 Predictable Latency Mode: Not Supported 00:07:52.025 Traffic Based Keep ALive: Not Supported 00:07:52.025 Namespace Granularity: Not Supported 00:07:52.025 SQ Associations: Not Supported 00:07:52.025 UUID List: Not Supported 00:07:52.025 Multi-Domain Subsystem: Not Supported 00:07:52.025 Fixed Capacity Management: Not Supported 00:07:52.025 Variable Capacity Management: Not Supported 00:07:52.025 Delete Endurance Group: Not Supported 00:07:52.025 Delete NVM Set: Not Supported 00:07:52.025 Extended LBA Formats Supported: Supported 00:07:52.025 Flexible Data Placement Supported: Not Supported 00:07:52.025 00:07:52.025 Controller Memory Buffer Support 00:07:52.025 ================================ 00:07:52.025 Supported: No 00:07:52.025 00:07:52.025 Persistent Memory Region Support 00:07:52.025 ================================ 00:07:52.025 Supported: No 00:07:52.025 00:07:52.025 Admin Command Set Attributes 00:07:52.026 ============================ 00:07:52.026 Security Send/Receive: Not Supported 00:07:52.026 Format NVM: Supported 00:07:52.026 Firmware Activate/Download: Not Supported 00:07:52.026 Namespace Management: Supported 00:07:52.026 Device Self-Test: Not Supported 00:07:52.026 Directives: Supported 00:07:52.026 NVMe-MI: Not Supported 00:07:52.026 Virtualization Management: Not Supported 00:07:52.026 Doorbell Buffer Config: Supported 00:07:52.026 Get LBA Status Capability: Not Supported 00:07:52.026 Command & Feature Lockdown Capability: Not Supported 00:07:52.026 Abort Command Limit: 4 00:07:52.026 Async Event Request Limit: 4 00:07:52.026 Number of Firmware Slots: N/A 00:07:52.026 Firmware Slot 1 Read-Only: N/A 00:07:52.026 Firmware Activation Without Reset: N/A 00:07:52.026 Multiple Update Detection Support: N/A 00:07:52.026 Firmware Update Granularity: No Information Provided 00:07:52.026 Per-Namespace SMART Log: Yes 00:07:52.026 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.026 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:52.026 Command Effects Log Page: Supported 00:07:52.026 Get Log Page Extended Data: Supported 00:07:52.026 Telemetry Log Pages: Not Supported 00:07:52.026 Persistent Event Log Pages: Not Supported 00:07:52.026 Supported Log Pages Log 
Page: May Support 00:07:52.026 Commands Supported & Effects Log Page: Not Supported 00:07:52.026 Feature Identifiers & Effects Log Page:May Support 00:07:52.026 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.026 Data Area 4 for Telemetry Log: Not Supported 00:07:52.026 Error Log Page Entries Supported: 1 00:07:52.026 Keep Alive: Not Supported 00:07:52.026 00:07:52.026 NVM Command Set Attributes 00:07:52.026 ========================== 00:07:52.026 Submission Queue Entry Size 00:07:52.026 Max: 64 00:07:52.026 Min: 64 00:07:52.026 Completion Queue Entry Size 00:07:52.026 Max: 16 00:07:52.026 Min: 16 00:07:52.026 Number of Namespaces: 256 00:07:52.026 Compare Command: Supported 00:07:52.026 Write Uncorrectable Command: Not Supported 00:07:52.026 Dataset Management Command: Supported 00:07:52.026 Write Zeroes Command: Supported 00:07:52.026 Set Features Save Field: Supported 00:07:52.026 Reservations: Not Supported 00:07:52.026 Timestamp: Supported 00:07:52.026 Copy: Supported 00:07:52.026 Volatile Write Cache: Present 00:07:52.026 Atomic Write Unit (Normal): 1 00:07:52.026 Atomic Write Unit (PFail): 1 00:07:52.026 Atomic Compare & Write Unit: 1 00:07:52.026 Fused Compare & Write: Not Supported 00:07:52.026 Scatter-Gather List 00:07:52.026 SGL Command Set: Supported 00:07:52.026 SGL Keyed: Not Supported 00:07:52.026 SGL Bit Bucket Descriptor: Not Supported 00:07:52.026 SGL Metadata Pointer: Not Supported 00:07:52.026 Oversized SGL: Not Supported 00:07:52.026 SGL Metadata Address: Not Supported 00:07:52.026 SGL Offset: Not Supported 00:07:52.026 Transport SGL Data Block: Not Supported 00:07:52.026 Replay Protected Memory Block: Not Supported 00:07:52.026 00:07:52.026 Firmware Slot Information 00:07:52.026 ========================= 00:07:52.026 Active slot: 1 00:07:52.026 Slot 1 Firmware Revision: 1.0 00:07:52.026 00:07:52.026 00:07:52.026 Commands Supported and Effects 00:07:52.026 ============================== 00:07:52.026 Admin Commands 00:07:52.026 -------------- 00:07:52.026 Delete I/O Submission Queue (00h): Supported 00:07:52.026 Create I/O Submission Queue (01h): Supported 00:07:52.026 Get Log Page (02h): Supported 00:07:52.026 Delete I/O Completion Queue (04h): Supported 00:07:52.026 Create I/O Completion Queue (05h): Supported 00:07:52.026 Identify (06h): Supported 00:07:52.026 Abort (08h): Supported 00:07:52.026 Set Features (09h): Supported 00:07:52.026 Get Features (0Ah): Supported 00:07:52.026 Asynchronous Event Request (0Ch): Supported 00:07:52.026 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.026 Directive Send (19h): Supported 00:07:52.026 Directive Receive (1Ah): Supported 00:07:52.026 Virtualization Management (1Ch): Supported 00:07:52.026 Doorbell Buffer Config (7Ch): Supported 00:07:52.026 Format NVM (80h): Supported LBA-Change 00:07:52.026 I/O Commands 00:07:52.026 ------------ 00:07:52.026 Flush (00h): Supported LBA-Change 00:07:52.026 Write (01h): Supported LBA-Change 00:07:52.026 Read (02h): Supported 00:07:52.026 Compare (05h): Supported 00:07:52.026 Write Zeroes (08h): Supported LBA-Change 00:07:52.026 Dataset Management (09h): Supported LBA-Change 00:07:52.026 Unknown (0Ch): Supported 00:07:52.026 Unknown (12h): Supported 00:07:52.026 Copy (19h): Supported LBA-Change 00:07:52.026 Unknown (1Dh): Supported LBA-Change 00:07:52.026 00:07:52.026 Error Log 00:07:52.026 ========= 00:07:52.026 00:07:52.026 Arbitration 00:07:52.026 =========== 00:07:52.026 Arbitration Burst: no limit 00:07:52.026 00:07:52.026 Power Management 00:07:52.026 
================ 00:07:52.026 Number of Power States: 1 00:07:52.026 Current Power State: Power State #0 00:07:52.026 Power State #0: 00:07:52.026 Max Power: 25.00 W 00:07:52.026 Non-Operational State: Operational 00:07:52.026 Entry Latency: 16 microseconds 00:07:52.026 Exit Latency: 4 microseconds 00:07:52.026 Relative Read Throughput: 0 00:07:52.026 Relative Read Latency: 0 00:07:52.026 Relative Write Throughput: 0 00:07:52.026 Relative Write Latency: 0 00:07:52.026 Idle Power: Not Reported 00:07:52.026 Active Power: Not Reported 00:07:52.026 Non-Operational Permissive Mode: Not Supported 00:07:52.026 00:07:52.026 Health Information 00:07:52.026 ================== 00:07:52.026 Critical Warnings: 00:07:52.026 Available Spare Space: OK 00:07:52.026 Temperature: OK 00:07:52.026 Device Reliability: OK 00:07:52.026 Read Only: No 00:07:52.026 Volatile Memory Backup: OK 00:07:52.026 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.026 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.026 Available Spare: 0% 00:07:52.026 Available Spare Threshold: 0% 00:07:52.026 Life Percentage Used: 0% 00:07:52.026 Data Units Read: 1109 00:07:52.026 Data Units Written: 983 00:07:52.026 Host Read Commands: 58712 00:07:52.026 Host Write Commands: 57608 00:07:52.026 Controller Busy Time: 0 minutes 00:07:52.026 Power Cycles: 0 00:07:52.026 Power On Hours: 0 hours 00:07:52.026 Unsafe Shutdowns: 0 00:07:52.026 Unrecoverable Media Errors: 0 00:07:52.026 Lifetime Error Log Entries: 0 00:07:52.026 Warning Temperature Time: 0 minutes 00:07:52.026 Critical Temperature Time: 0 minutes 00:07:52.026 00:07:52.026 Number of Queues 00:07:52.026 ================ 00:07:52.026 Number of I/O Submission Queues: 64 00:07:52.026 Number of I/O Completion Queues: 64 00:07:52.026 00:07:52.026 ZNS Specific Controller Data 00:07:52.026 ============================ 00:07:52.026 Zone Append Size Limit: 0 00:07:52.026 00:07:52.026 00:07:52.026 Active Namespaces 00:07:52.026 ================= 00:07:52.026 Namespace ID:1 00:07:52.026 Error Recovery Timeout: Unlimited 00:07:52.026 Command Set Identifier: NVM (00h) 00:07:52.026 Deallocate: Supported 00:07:52.026 Deallocated/Unwritten Error: Supported 00:07:52.026 Deallocated Read Value: All 0x00 00:07:52.026 Deallocate in Write Zeroes: Not Supported 00:07:52.026 Deallocated Guard Field: 0xFFFF 00:07:52.026 Flush: Supported 00:07:52.026 Reservation: Not Supported 00:07:52.026 Namespace Sharing Capabilities: Private 00:07:52.026 Size (in LBAs): 1310720 (5GiB) 00:07:52.026 Capacity (in LBAs): 1310720 (5GiB) 00:07:52.026 Utilization (in LBAs): 1310720 (5GiB) 00:07:52.026 Thin Provisioning: Not Supported 00:07:52.026 Per-NS Atomic Units: No 00:07:52.026 Maximum Single Source Range Length: 128 00:07:52.026 Maximum Copy Length: 128 00:07:52.026 Maximum Source Range Count: 128 00:07:52.026 NGUID/EUI64 Never Reused: No 00:07:52.026 Namespace Write Protected: No 00:07:52.026 Number of LBA Formats: 8 00:07:52.026 Current LBA Format: LBA Format #04 00:07:52.026 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.026 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.026 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.026 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.026 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.026 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.026 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.026 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.026 00:07:52.026 NVM Specific Namespace Data 
00:07:52.026 =========================== 00:07:52.026 Logical Block Storage Tag Mask: 0 00:07:52.026 Protection Information Capabilities: 00:07:52.026 16b Guard Protection Information Storage Tag Support: No 00:07:52.026 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.026 Storage Tag Check Read Support: No 00:07:52.026 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.026 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.026 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.026 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.026 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.026 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.027 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.027 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.027 ===================================================== 00:07:52.027 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:52.027 ===================================================== 00:07:52.027 Controller Capabilities/Features 00:07:52.027 ================================ 00:07:52.027 Vendor ID: 1b36 00:07:52.027 Subsystem Vendor ID: 1af4 00:07:52.027 Serial Number: 12343 00:07:52.027 Model Number: QEMU NVMe Ctrl 00:07:52.027 Firmware Version: 8.0.0 00:07:52.027 Recommended Arb Burst: 6 00:07:52.027 IEEE OUI Identifier: 00 54 52 00:07:52.027 Multi-path I/O 00:07:52.027 May have multiple subsystem ports: No 00:07:52.027 May have multiple controllers: Yes 00:07:52.027 Associated with SR-IOV VF: No 00:07:52.027 Max Data Transfer Size: 524288 00:07:52.027 Max Number of Namespaces: 256 00:07:52.027 Max Number of I/O Queues: 64 00:07:52.027 NVMe Specification Version (VS): 1.4 00:07:52.027 NVMe Specification Version (Identify): 1.4 00:07:52.027 Maximum Queue Entries: 2048 00:07:52.027 Contiguous Queues Required: Yes 00:07:52.027 Arbitration Mechanisms Supported 00:07:52.027 Weighted Round Robin: Not Supported 00:07:52.027 Vendor Specific: Not Supported 00:07:52.027 Reset Timeout: 7500 ms 00:07:52.027 Doorbell Stride: 4 bytes 00:07:52.027 NVM Subsystem Reset: Not Supported 00:07:52.027 Command Sets Supported 00:07:52.027 NVM Command Set: Supported 00:07:52.027 Boot Partition: Not Supported 00:07:52.027 Memory Page Size Minimum: 4096 bytes 00:07:52.027 Memory Page Size Maximum: 65536 bytes 00:07:52.027 Persistent Memory Region: Not Supported 00:07:52.027 Optional Asynchronous Events Supported 00:07:52.027 Namespace Attribute Notices: Supported 00:07:52.027 Firmware Activation Notices: Not Supported 00:07:52.027 ANA Change Notices: Not Supported 00:07:52.027 PLE Aggregate Log Change Notices: Not Supported 00:07:52.027 LBA Status Info Alert Notices: Not Supported 00:07:52.027 EGE Aggregate Log Change Notices: Not Supported 00:07:52.027 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.027 Zone Descriptor Change Notices: Not Supported 00:07:52.027 Discovery Log Change Notices: Not Supported 00:07:52.027 Controller Attributes 00:07:52.027 128-bit Host Identifier: Not Supported 00:07:52.027 Non-Operational Permissive Mode: Not Supported 00:07:52.027 NVM Sets: Not Supported 00:07:52.027 Read Recovery Levels: Not Supported 
00:07:52.027 Endurance Groups: Supported 00:07:52.027 Predictable Latency Mode: Not Supported 00:07:52.027 Traffic Based Keep ALive: Not Supported 00:07:52.027 Namespace Granularity: Not Supported 00:07:52.027 SQ Associations: Not Supported 00:07:52.027 UUID List: Not Supported 00:07:52.027 Multi-Domain Subsystem: Not Supported 00:07:52.027 Fixed Capacity Management: Not Supported 00:07:52.027 Variable Capacity Management: Not Supported 00:07:52.027 Delete Endurance Group: Not Supported 00:07:52.027 Delete NVM Set: Not Supported 00:07:52.027 Extended LBA Formats Supported: Supported 00:07:52.027 Flexible Data Placement Supported: Supported 00:07:52.027 00:07:52.027 Controller Memory Buffer Support 00:07:52.027 ================================ 00:07:52.027 Supported: No 00:07:52.027 00:07:52.027 Persistent Memory Region Support 00:07:52.027 ================================ 00:07:52.027 Supported: No 00:07:52.027 00:07:52.027 Admin Command Set Attributes 00:07:52.027 ============================ 00:07:52.027 Security Send/Receive: Not Supported 00:07:52.027 Format NVM: Supported 00:07:52.027 Firmware Activate/Download: Not Supported 00:07:52.027 Namespace Management: Supported 00:07:52.027 Device Self-Test: Not Supported 00:07:52.027 Directives: Supported 00:07:52.027 NVMe-MI: Not Supported 00:07:52.027 Virtualization Management: Not Supported 00:07:52.027 Doorbell Buffer Config: Supported 00:07:52.027 Get LBA Status Capability: Not Supported 00:07:52.027 Command & Feature Lockdown Capability: Not Supported 00:07:52.027 Abort Command Limit: 4 00:07:52.027 Async Event Request Limit: 4 00:07:52.027 Number of Firmware Slots: N/A 00:07:52.027 Firmware Slot 1 Read-Only: N/A 00:07:52.027 Firmware Activation Without Reset: N/A 00:07:52.027 Multiple Update Detection Support: N/A 00:07:52.027 Firmware Update Granularity: No Information Provided 00:07:52.027 Per-Namespace SMART Log: Yes 00:07:52.027 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.027 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:52.027 Command Effects Log Page: Supported 00:07:52.027 Get Log Page Extended Data: Supported 00:07:52.027 Telemetry Log Pages: Not Supported 00:07:52.027 Persistent Event Log Pages: Not Supported 00:07:52.027 Supported Log Pages Log Page: May Support 00:07:52.027 Commands Supported & Effects Log Page: Not Supported 00:07:52.027 Feature Identifiers & Effects Log Page:May Support 00:07:52.027 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.027 Data Area 4 for Telemetry Log: Not Supported 00:07:52.027 Error Log Page Entries Supported: 1 00:07:52.027 Keep Alive: Not Supported 00:07:52.027 00:07:52.027 NVM Command Set Attributes 00:07:52.027 ========================== 00:07:52.027 Submission Queue Entry Size 00:07:52.027 Max: 64 00:07:52.027 Min: 64 00:07:52.027 Completion Queue Entry Size 00:07:52.027 Max: 16 00:07:52.027 Min: 16 00:07:52.027 Number of Namespaces: 256 00:07:52.027 Compare Command: Supported 00:07:52.027 Write Uncorrectable Command: Not Supported 00:07:52.027 Dataset Management Command: Supported 00:07:52.027 Write Zeroes Command: Supported 00:07:52.027 Set Features Save Field: Supported 00:07:52.027 Reservations: Not Supported 00:07:52.027 Timestamp: Supported 00:07:52.027 Copy: Supported 00:07:52.027 Volatile Write Cache: Present 00:07:52.027 Atomic Write Unit (Normal): 1 00:07:52.027 Atomic Write Unit (PFail): 1 00:07:52.027 Atomic Compare & Write Unit: 1 00:07:52.027 Fused Compare & Write: Not Supported 00:07:52.027 Scatter-Gather List 00:07:52.027 SGL Command 
Set: Supported 00:07:52.027 SGL Keyed: Not Supported 00:07:52.027 SGL Bit Bucket Descriptor: Not Supported 00:07:52.027 SGL Metadata Pointer: Not Supported 00:07:52.027 Oversized SGL: Not Supported 00:07:52.027 SGL Metadata Address: Not Supported 00:07:52.027 SGL Offset: Not Supported 00:07:52.027 Transport SGL Data Block: Not Supported 00:07:52.027 Replay Protected Memory Block: Not Supported 00:07:52.027 00:07:52.027 Firmware Slot Information 00:07:52.027 ========================= 00:07:52.027 Active slot: 1 00:07:52.027 Slot 1 Firmware Revision: 1.0 00:07:52.027 00:07:52.027 00:07:52.027 Commands Supported and Effects 00:07:52.027 ============================== 00:07:52.027 Admin Commands 00:07:52.027 -------------- 00:07:52.027 Delete I/O Submission Queue (00h): Supported 00:07:52.027 Create I/O Submission Queue (01h): Supported 00:07:52.027 Get Log Page (02h): Supported 00:07:52.027 Delete I/O Completion Queue (04h): Supported 00:07:52.027 Create I/O Completion Queue (05h): Supported 00:07:52.027 Identify (06h): Supported 00:07:52.027 Abort (08h): Supported 00:07:52.027 Set Features (09h): Supported 00:07:52.027 Get Features (0Ah): Supported 00:07:52.027 Asynchronous Event Request (0Ch): Supported 00:07:52.027 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.027 Directive Send (19h): Supported 00:07:52.027 Directive Receive (1Ah): Supported 00:07:52.027 Virtualization Management (1Ch): Supported 00:07:52.027 Doorbell Buffer Config (7Ch): Supported 00:07:52.027 Format NVM (80h): Supported LBA-Change 00:07:52.027 I/O Commands 00:07:52.027 ------------ 00:07:52.027 Flush (00h): Supported LBA-Change 00:07:52.027 Write (01h): Supported LBA-Change 00:07:52.027 Read (02h): Supported 00:07:52.027 Compare (05h): Supported 00:07:52.027 Write Zeroes (08h): Supported LBA-Change 00:07:52.027 Dataset Management (09h): Supported LBA-Change 00:07:52.027 Unknown (0Ch): Supported 00:07:52.027 Unknown (12h): Supported 00:07:52.027 Copy (19h): Supported LBA-Change 00:07:52.027 Unknown (1Dh): Supported LBA-Change 00:07:52.027 00:07:52.027 Error Log 00:07:52.027 ========= 00:07:52.027 00:07:52.027 Arbitration 00:07:52.027 =========== 00:07:52.027 Arbitration Burst: no limit 00:07:52.027 00:07:52.027 Power Management 00:07:52.027 ================ 00:07:52.027 Number of Power States: 1 00:07:52.027 Current Power State: Power State #0 00:07:52.027 Power State #0: 00:07:52.027 Max Power: 25.00 W 00:07:52.027 Non-Operational State: Operational 00:07:52.027 Entry Latency: 16 microseconds 00:07:52.027 Exit Latency: 4 microseconds 00:07:52.027 Relative Read Throughput: 0 00:07:52.027 Relative Read Latency: 0 00:07:52.027 Relative Write Throughput: 0 00:07:52.027 Relative Write Latency: 0 00:07:52.027 Idle Power: Not Reported 00:07:52.027 Active Power: Not Reported 00:07:52.027 Non-Operational Permissive Mode: Not Supported 00:07:52.027 00:07:52.027 Health Information 00:07:52.027 ================== 00:07:52.027 Critical Warnings: 00:07:52.028 Available Spare Space: OK 00:07:52.028 Temperature: OK 00:07:52.028 Device Reliability: OK 00:07:52.028 Read Only: No 00:07:52.028 Volatile Memory Backup: OK 00:07:52.028 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.028 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.028 Available Spare: 0% 00:07:52.028 Available Spare Threshold: 0% 00:07:52.028 [2024-12-13 18:02:26.229407] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 76131 terminated unexpected [2024-12-13 18:02:26.229818] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 76131 terminated unexpected
00:07:52.028 Life Percentage Used: 0% 00:07:52.028 Data Units Read: 857 00:07:52.028 Data Units Written: 786 00:07:52.028 Host Read Commands: 40915 00:07:52.028 Host Write Commands: 40338 00:07:52.028 Controller Busy Time: 0 minutes 00:07:52.028 Power Cycles: 0 00:07:52.028 Power On Hours: 0 hours 00:07:52.028 Unsafe Shutdowns: 0 00:07:52.028 Unrecoverable Media Errors: 0 00:07:52.028 Lifetime Error Log Entries: 0 00:07:52.028 Warning Temperature Time: 0 minutes 00:07:52.028 Critical Temperature Time: 0 minutes 00:07:52.028 00:07:52.028 Number of Queues 00:07:52.028 ================ 00:07:52.028 Number of I/O Submission Queues: 64 00:07:52.028 Number of I/O Completion Queues: 64 00:07:52.028 00:07:52.028 ZNS Specific Controller Data 00:07:52.028 ============================ 00:07:52.028 Zone Append Size Limit: 0 00:07:52.028 00:07:52.028 00:07:52.028 Active Namespaces 00:07:52.028 ================= 00:07:52.028 Namespace ID:1 00:07:52.028 Error Recovery Timeout: Unlimited 00:07:52.028 Command Set Identifier: NVM (00h) 00:07:52.028 Deallocate: Supported 00:07:52.028 Deallocated/Unwritten Error: Supported 00:07:52.028 Deallocated Read Value: All 0x00 00:07:52.028 Deallocate in Write Zeroes: Not Supported 00:07:52.028 Deallocated Guard Field: 0xFFFF 00:07:52.028 Flush: Supported 00:07:52.028 Reservation: Not Supported 00:07:52.028 Namespace Sharing Capabilities: Multiple Controllers 00:07:52.028 Size (in LBAs): 262144 (1GiB) 00:07:52.028 Capacity (in LBAs): 262144 (1GiB) 00:07:52.028 Utilization (in LBAs): 262144 (1GiB) 00:07:52.028 Thin Provisioning: Not Supported 00:07:52.028 Per-NS Atomic Units: No 00:07:52.028 Maximum Single Source Range Length: 128 00:07:52.028 Maximum Copy Length: 128 00:07:52.028 Maximum Source Range Count: 128 00:07:52.028 NGUID/EUI64 Never Reused: No 00:07:52.028 Namespace Write Protected: No 00:07:52.028 Endurance group ID: 1 00:07:52.028 Number of LBA Formats: 8 00:07:52.028 Current LBA Format: LBA Format #04 00:07:52.028 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.028 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.028 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.028 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.028 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.028 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.028 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.028 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.028 00:07:52.028 Get Feature FDP: 00:07:52.028 ================ 00:07:52.028 Enabled: Yes 00:07:52.028 FDP configuration index: 0 00:07:52.028 00:07:52.028 FDP configurations log page 00:07:52.028 =========================== 00:07:52.028 Number of FDP configurations: 1 00:07:52.028 Version: 0 00:07:52.028 Size: 112 00:07:52.028 FDP Configuration Descriptor: 0 00:07:52.028 Descriptor Size: 96 00:07:52.028 Reclaim Group Identifier format: 2 00:07:52.028 FDP Volatile Write Cache: Not Present 00:07:52.028 FDP Configuration: Valid 00:07:52.028 Vendor Specific Size: 0 00:07:52.028 Number of Reclaim Groups: 2 00:07:52.028 Number of Reclaim Unit Handles: 8 00:07:52.028 Max Placement Identifiers: 128 00:07:52.028 Number of Namespaces Supported: 256 00:07:52.028 Reclaim unit Nominal Size: 6000000 bytes 00:07:52.028 Estimated Reclaim Unit Time Limit: Not Reported 00:07:52.028 RUH Desc #000: RUH Type: Initially Isolated 00:07:52.028 RUH Desc #001: RUH Type: Initially
Isolated 00:07:52.028 RUH Desc #002: RUH Type: Initially Isolated 00:07:52.028 RUH Desc #003: RUH Type: Initially Isolated 00:07:52.028 RUH Desc #004: RUH Type: Initially Isolated 00:07:52.028 RUH Desc #005: RUH Type: Initially Isolated 00:07:52.028 RUH Desc #006: RUH Type: Initially Isolated 00:07:52.028 RUH Desc #007: RUH Type: Initially Isolated 00:07:52.028 00:07:52.028 FDP reclaim unit handle usage log page 00:07:52.028 ====================================== 00:07:52.028 Number of Reclaim Unit Handles: 8 00:07:52.028 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:52.028 RUH Usage Desc #001: RUH Attributes: Unused 00:07:52.028 RUH Usage Desc #002: RUH Attributes: Unused 00:07:52.028 RUH Usage Desc #003: RUH Attributes: Unused 00:07:52.028 RUH Usage Desc #004: RUH Attributes: Unused 00:07:52.028 RUH Usage Desc #005: RUH Attributes: Unused 00:07:52.028 RUH Usage Desc #006: RUH Attributes: Unused 00:07:52.028 RUH Usage Desc #007: RUH Attributes: Unused 00:07:52.028 00:07:52.028 FDP statistics log page 00:07:52.028 ======================= 00:07:52.028 Host bytes with metadata written: 505389056 00:07:52.028 [2024-12-13 18:02:26.231144] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 76131 terminated unexpected 00:07:52.028 Media bytes with metadata written: 505446400 00:07:52.028 Media bytes erased: 0 00:07:52.028 00:07:52.028 FDP events log page 00:07:52.028 =================== 00:07:52.028 Number of FDP events: 0 00:07:52.028 00:07:52.028 NVM Specific Namespace Data 00:07:52.028 =========================== 00:07:52.028 Logical Block Storage Tag Mask: 0 00:07:52.028 Protection Information Capabilities: 00:07:52.028 16b Guard Protection Information Storage Tag Support: No 00:07:52.028 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.028 Storage Tag Check Read Support: No 00:07:52.028 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.028 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.028 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.028 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.028 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.028 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.028 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.028 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.028 ===================================================== 00:07:52.028 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:52.028 ===================================================== 00:07:52.028 Controller Capabilities/Features 00:07:52.028 ================================ 00:07:52.028 Vendor ID: 1b36 00:07:52.028 Subsystem Vendor ID: 1af4 00:07:52.028 Serial Number: 12342 00:07:52.028 Model Number: QEMU NVMe Ctrl 00:07:52.028 Firmware Version: 8.0.0 00:07:52.028 Recommended Arb Burst: 6 00:07:52.028 IEEE OUI Identifier: 00 54 52 00:07:52.028 Multi-path I/O 00:07:52.028 May have multiple subsystem ports: No 00:07:52.028 May have multiple controllers: No 00:07:52.028 Associated with SR-IOV VF: No 00:07:52.028 Max Data Transfer Size: 524288 00:07:52.028 Max Number of Namespaces: 256 00:07:52.028 Max Number of
I/O Queues: 64 00:07:52.028 NVMe Specification Version (VS): 1.4 00:07:52.028 NVMe Specification Version (Identify): 1.4 00:07:52.028 Maximum Queue Entries: 2048 00:07:52.028 Contiguous Queues Required: Yes 00:07:52.028 Arbitration Mechanisms Supported 00:07:52.028 Weighted Round Robin: Not Supported 00:07:52.028 Vendor Specific: Not Supported 00:07:52.028 Reset Timeout: 7500 ms 00:07:52.028 Doorbell Stride: 4 bytes 00:07:52.028 NVM Subsystem Reset: Not Supported 00:07:52.028 Command Sets Supported 00:07:52.028 NVM Command Set: Supported 00:07:52.028 Boot Partition: Not Supported 00:07:52.028 Memory Page Size Minimum: 4096 bytes 00:07:52.028 Memory Page Size Maximum: 65536 bytes 00:07:52.029 Persistent Memory Region: Not Supported 00:07:52.029 Optional Asynchronous Events Supported 00:07:52.029 Namespace Attribute Notices: Supported 00:07:52.029 Firmware Activation Notices: Not Supported 00:07:52.029 ANA Change Notices: Not Supported 00:07:52.029 PLE Aggregate Log Change Notices: Not Supported 00:07:52.029 LBA Status Info Alert Notices: Not Supported 00:07:52.029 EGE Aggregate Log Change Notices: Not Supported 00:07:52.029 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.029 Zone Descriptor Change Notices: Not Supported 00:07:52.029 Discovery Log Change Notices: Not Supported 00:07:52.029 Controller Attributes 00:07:52.029 128-bit Host Identifier: Not Supported 00:07:52.029 Non-Operational Permissive Mode: Not Supported 00:07:52.029 NVM Sets: Not Supported 00:07:52.029 Read Recovery Levels: Not Supported 00:07:52.029 Endurance Groups: Not Supported 00:07:52.029 Predictable Latency Mode: Not Supported 00:07:52.029 Traffic Based Keep ALive: Not Supported 00:07:52.029 Namespace Granularity: Not Supported 00:07:52.029 SQ Associations: Not Supported 00:07:52.029 UUID List: Not Supported 00:07:52.029 Multi-Domain Subsystem: Not Supported 00:07:52.029 Fixed Capacity Management: Not Supported 00:07:52.029 Variable Capacity Management: Not Supported 00:07:52.029 Delete Endurance Group: Not Supported 00:07:52.029 Delete NVM Set: Not Supported 00:07:52.029 Extended LBA Formats Supported: Supported 00:07:52.029 Flexible Data Placement Supported: Not Supported 00:07:52.029 00:07:52.029 Controller Memory Buffer Support 00:07:52.029 ================================ 00:07:52.029 Supported: No 00:07:52.029 00:07:52.029 Persistent Memory Region Support 00:07:52.029 ================================ 00:07:52.029 Supported: No 00:07:52.029 00:07:52.029 Admin Command Set Attributes 00:07:52.029 ============================ 00:07:52.029 Security Send/Receive: Not Supported 00:07:52.029 Format NVM: Supported 00:07:52.029 Firmware Activate/Download: Not Supported 00:07:52.029 Namespace Management: Supported 00:07:52.029 Device Self-Test: Not Supported 00:07:52.029 Directives: Supported 00:07:52.029 NVMe-MI: Not Supported 00:07:52.029 Virtualization Management: Not Supported 00:07:52.029 Doorbell Buffer Config: Supported 00:07:52.029 Get LBA Status Capability: Not Supported 00:07:52.029 Command & Feature Lockdown Capability: Not Supported 00:07:52.029 Abort Command Limit: 4 00:07:52.029 Async Event Request Limit: 4 00:07:52.029 Number of Firmware Slots: N/A 00:07:52.029 Firmware Slot 1 Read-Only: N/A 00:07:52.029 Firmware Activation Without Reset: N/A 00:07:52.029 Multiple Update Detection Support: N/A 00:07:52.029 Firmware Update Granularity: No Information Provided 00:07:52.029 Per-Namespace SMART Log: Yes 00:07:52.029 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.029 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:07:52.029 Command Effects Log Page: Supported 00:07:52.029 Get Log Page Extended Data: Supported 00:07:52.029 Telemetry Log Pages: Not Supported 00:07:52.029 Persistent Event Log Pages: Not Supported 00:07:52.029 Supported Log Pages Log Page: May Support 00:07:52.029 Commands Supported & Effects Log Page: Not Supported 00:07:52.029 Feature Identifiers & Effects Log Page:May Support 00:07:52.029 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.029 Data Area 4 for Telemetry Log: Not Supported 00:07:52.029 Error Log Page Entries Supported: 1 00:07:52.029 Keep Alive: Not Supported 00:07:52.029 00:07:52.029 NVM Command Set Attributes 00:07:52.029 ========================== 00:07:52.029 Submission Queue Entry Size 00:07:52.029 Max: 64 00:07:52.029 Min: 64 00:07:52.029 Completion Queue Entry Size 00:07:52.029 Max: 16 00:07:52.029 Min: 16 00:07:52.029 Number of Namespaces: 256 00:07:52.029 Compare Command: Supported 00:07:52.029 Write Uncorrectable Command: Not Supported 00:07:52.029 Dataset Management Command: Supported 00:07:52.029 Write Zeroes Command: Supported 00:07:52.029 Set Features Save Field: Supported 00:07:52.029 Reservations: Not Supported 00:07:52.029 Timestamp: Supported 00:07:52.029 Copy: Supported 00:07:52.029 Volatile Write Cache: Present 00:07:52.029 Atomic Write Unit (Normal): 1 00:07:52.029 Atomic Write Unit (PFail): 1 00:07:52.029 Atomic Compare & Write Unit: 1 00:07:52.029 Fused Compare & Write: Not Supported 00:07:52.029 Scatter-Gather List 00:07:52.029 SGL Command Set: Supported 00:07:52.029 SGL Keyed: Not Supported 00:07:52.029 SGL Bit Bucket Descriptor: Not Supported 00:07:52.029 SGL Metadata Pointer: Not Supported 00:07:52.029 Oversized SGL: Not Supported 00:07:52.029 SGL Metadata Address: Not Supported 00:07:52.029 SGL Offset: Not Supported 00:07:52.029 Transport SGL Data Block: Not Supported 00:07:52.029 Replay Protected Memory Block: Not Supported 00:07:52.029 00:07:52.029 Firmware Slot Information 00:07:52.029 ========================= 00:07:52.029 Active slot: 1 00:07:52.029 Slot 1 Firmware Revision: 1.0 00:07:52.029 00:07:52.029 00:07:52.029 Commands Supported and Effects 00:07:52.029 ============================== 00:07:52.029 Admin Commands 00:07:52.029 -------------- 00:07:52.029 Delete I/O Submission Queue (00h): Supported 00:07:52.029 Create I/O Submission Queue (01h): Supported 00:07:52.029 Get Log Page (02h): Supported 00:07:52.029 Delete I/O Completion Queue (04h): Supported 00:07:52.029 Create I/O Completion Queue (05h): Supported 00:07:52.029 Identify (06h): Supported 00:07:52.029 Abort (08h): Supported 00:07:52.029 Set Features (09h): Supported 00:07:52.029 Get Features (0Ah): Supported 00:07:52.029 Asynchronous Event Request (0Ch): Supported 00:07:52.029 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.029 Directive Send (19h): Supported 00:07:52.029 Directive Receive (1Ah): Supported 00:07:52.029 Virtualization Management (1Ch): Supported 00:07:52.029 Doorbell Buffer Config (7Ch): Supported 00:07:52.029 Format NVM (80h): Supported LBA-Change 00:07:52.029 I/O Commands 00:07:52.029 ------------ 00:07:52.029 Flush (00h): Supported LBA-Change 00:07:52.029 Write (01h): Supported LBA-Change 00:07:52.029 Read (02h): Supported 00:07:52.029 Compare (05h): Supported 00:07:52.029 Write Zeroes (08h): Supported LBA-Change 00:07:52.029 Dataset Management (09h): Supported LBA-Change 00:07:52.029 Unknown (0Ch): Supported 00:07:52.029 Unknown (12h): Supported 00:07:52.029 Copy (19h): Supported LBA-Change 
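
[Editor's aside: a quick arithmetic check on the Active Namespaces sections below, which report Size/Capacity/Utilization as 1048576 LBAs (4GiB) under the current LBA Format #04 (4096-byte data blocks, no metadata):

    1048576 LBAs x 4096 bytes/LBA = 4294967296 bytes = 4 x 2^30 bytes = 4 GiB

The 512-byte formats #00-#03 would give only 512 MiB for the same LBA count, so the (4GiB) annotation holds specifically for the 4096-byte formats.]
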
00:07:52.029 Unknown (1Dh): Supported LBA-Change 00:07:52.029 00:07:52.029 Error Log 00:07:52.029 ========= 00:07:52.029 00:07:52.029 Arbitration 00:07:52.029 =========== 00:07:52.029 Arbitration Burst: no limit 00:07:52.029 00:07:52.029 Power Management 00:07:52.029 ================ 00:07:52.029 Number of Power States: 1 00:07:52.029 Current Power State: Power State #0 00:07:52.029 Power State #0: 00:07:52.029 Max Power: 25.00 W 00:07:52.029 Non-Operational State: Operational 00:07:52.029 Entry Latency: 16 microseconds 00:07:52.029 Exit Latency: 4 microseconds 00:07:52.029 Relative Read Throughput: 0 00:07:52.029 Relative Read Latency: 0 00:07:52.029 Relative Write Throughput: 0 00:07:52.029 Relative Write Latency: 0 00:07:52.029 Idle Power: Not Reported 00:07:52.029 Active Power: Not Reported 00:07:52.029 Non-Operational Permissive Mode: Not Supported 00:07:52.029 00:07:52.029 Health Information 00:07:52.029 ================== 00:07:52.029 Critical Warnings: 00:07:52.029 Available Spare Space: OK 00:07:52.029 Temperature: OK 00:07:52.029 Device Reliability: OK 00:07:52.029 Read Only: No 00:07:52.029 Volatile Memory Backup: OK 00:07:52.029 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.029 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.029 Available Spare: 0% 00:07:52.029 Available Spare Threshold: 0% 00:07:52.029 Life Percentage Used: 0% 00:07:52.029 Data Units Read: 2307 00:07:52.029 Data Units Written: 2094 00:07:52.029 Host Read Commands: 120385 00:07:52.029 Host Write Commands: 118654 00:07:52.029 Controller Busy Time: 0 minutes 00:07:52.029 Power Cycles: 0 00:07:52.029 Power On Hours: 0 hours 00:07:52.029 Unsafe Shutdowns: 0 00:07:52.029 Unrecoverable Media Errors: 0 00:07:52.029 Lifetime Error Log Entries: 0 00:07:52.029 Warning Temperature Time: 0 minutes 00:07:52.029 Critical Temperature Time: 0 minutes 00:07:52.029 00:07:52.029 Number of Queues 00:07:52.029 ================ 00:07:52.029 Number of I/O Submission Queues: 64 00:07:52.029 Number of I/O Completion Queues: 64 00:07:52.029 00:07:52.029 ZNS Specific Controller Data 00:07:52.029 ============================ 00:07:52.029 Zone Append Size Limit: 0 00:07:52.029 00:07:52.029 00:07:52.029 Active Namespaces 00:07:52.029 ================= 00:07:52.029 Namespace ID:1 00:07:52.029 Error Recovery Timeout: Unlimited 00:07:52.029 Command Set Identifier: NVM (00h) 00:07:52.029 Deallocate: Supported 00:07:52.029 Deallocated/Unwritten Error: Supported 00:07:52.029 Deallocated Read Value: All 0x00 00:07:52.029 Deallocate in Write Zeroes: Not Supported 00:07:52.029 Deallocated Guard Field: 0xFFFF 00:07:52.029 Flush: Supported 00:07:52.030 Reservation: Not Supported 00:07:52.030 Namespace Sharing Capabilities: Private 00:07:52.030 Size (in LBAs): 1048576 (4GiB) 00:07:52.030 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.030 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.030 Thin Provisioning: Not Supported 00:07:52.030 Per-NS Atomic Units: No 00:07:52.030 Maximum Single Source Range Length: 128 00:07:52.030 Maximum Copy Length: 128 00:07:52.030 Maximum Source Range Count: 128 00:07:52.030 NGUID/EUI64 Never Reused: No 00:07:52.030 Namespace Write Protected: No 00:07:52.030 Number of LBA Formats: 8 00:07:52.030 Current LBA Format: LBA Format #04 00:07:52.030 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.030 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.030 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.030 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.030 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:52.030 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.030 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.030 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.030 00:07:52.030 NVM Specific Namespace Data 00:07:52.030 =========================== 00:07:52.030 Logical Block Storage Tag Mask: 0 00:07:52.030 Protection Information Capabilities: 00:07:52.030 16b Guard Protection Information Storage Tag Support: No 00:07:52.030 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.030 Storage Tag Check Read Support: No 00:07:52.030 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Namespace ID:2 00:07:52.030 Error Recovery Timeout: Unlimited 00:07:52.030 Command Set Identifier: NVM (00h) 00:07:52.030 Deallocate: Supported 00:07:52.030 Deallocated/Unwritten Error: Supported 00:07:52.030 Deallocated Read Value: All 0x00 00:07:52.030 Deallocate in Write Zeroes: Not Supported 00:07:52.030 Deallocated Guard Field: 0xFFFF 00:07:52.030 Flush: Supported 00:07:52.030 Reservation: Not Supported 00:07:52.030 Namespace Sharing Capabilities: Private 00:07:52.030 Size (in LBAs): 1048576 (4GiB) 00:07:52.030 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.030 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.030 Thin Provisioning: Not Supported 00:07:52.030 Per-NS Atomic Units: No 00:07:52.030 Maximum Single Source Range Length: 128 00:07:52.030 Maximum Copy Length: 128 00:07:52.030 Maximum Source Range Count: 128 00:07:52.030 NGUID/EUI64 Never Reused: No 00:07:52.030 Namespace Write Protected: No 00:07:52.030 Number of LBA Formats: 8 00:07:52.030 Current LBA Format: LBA Format #04 00:07:52.030 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.030 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.030 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.030 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.030 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.030 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.030 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.030 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.030 00:07:52.030 NVM Specific Namespace Data 00:07:52.030 =========================== 00:07:52.030 Logical Block Storage Tag Mask: 0 00:07:52.030 Protection Information Capabilities: 00:07:52.030 16b Guard Protection Information Storage Tag Support: No 00:07:52.030 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.030 Storage Tag Check Read Support: No 00:07:52.030 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:52.030 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Namespace ID:3 00:07:52.030 Error Recovery Timeout: Unlimited 00:07:52.030 Command Set Identifier: NVM (00h) 00:07:52.030 Deallocate: Supported 00:07:52.030 Deallocated/Unwritten Error: Supported 00:07:52.030 Deallocated Read Value: All 0x00 00:07:52.030 Deallocate in Write Zeroes: Not Supported 00:07:52.030 Deallocated Guard Field: 0xFFFF 00:07:52.030 Flush: Supported 00:07:52.030 Reservation: Not Supported 00:07:52.030 Namespace Sharing Capabilities: Private 00:07:52.030 Size (in LBAs): 1048576 (4GiB) 00:07:52.030 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.030 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.030 Thin Provisioning: Not Supported 00:07:52.030 Per-NS Atomic Units: No 00:07:52.030 Maximum Single Source Range Length: 128 00:07:52.030 Maximum Copy Length: 128 00:07:52.030 Maximum Source Range Count: 128 00:07:52.030 NGUID/EUI64 Never Reused: No 00:07:52.030 Namespace Write Protected: No 00:07:52.030 Number of LBA Formats: 8 00:07:52.030 Current LBA Format: LBA Format #04 00:07:52.030 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.030 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.030 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.030 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.030 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.030 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.030 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.030 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.030 00:07:52.030 NVM Specific Namespace Data 00:07:52.030 =========================== 00:07:52.030 Logical Block Storage Tag Mask: 0 00:07:52.030 Protection Information Capabilities: 00:07:52.030 16b Guard Protection Information Storage Tag Support: No 00:07:52.030 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.030 Storage Tag Check Read Support: No 00:07:52.030 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.030 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:52.030 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:52.296 ===================================================== 00:07:52.296 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:52.296 ===================================================== 00:07:52.296 Controller Capabilities/Features 00:07:52.296 ================================ 00:07:52.296 Vendor ID: 1b36 00:07:52.296 Subsystem Vendor ID: 1af4 00:07:52.296 Serial Number: 12340 00:07:52.296 Model Number: QEMU NVMe Ctrl 00:07:52.296 Firmware Version: 8.0.0 00:07:52.296 Recommended Arb Burst: 6 00:07:52.296 IEEE OUI Identifier: 00 54 52 00:07:52.296 Multi-path I/O 00:07:52.296 May have multiple subsystem ports: No 00:07:52.296 May have multiple controllers: No 00:07:52.296 Associated with SR-IOV VF: No 00:07:52.296 Max Data Transfer Size: 524288 00:07:52.296 Max Number of Namespaces: 256 00:07:52.296 Max Number of I/O Queues: 64 00:07:52.296 NVMe Specification Version (VS): 1.4 00:07:52.296 NVMe Specification Version (Identify): 1.4 00:07:52.296 Maximum Queue Entries: 2048 00:07:52.296 Contiguous Queues Required: Yes 00:07:52.296 Arbitration Mechanisms Supported 00:07:52.296 Weighted Round Robin: Not Supported 00:07:52.296 Vendor Specific: Not Supported 00:07:52.296 Reset Timeout: 7500 ms 00:07:52.296 Doorbell Stride: 4 bytes 00:07:52.296 NVM Subsystem Reset: Not Supported 00:07:52.296 Command Sets Supported 00:07:52.296 NVM Command Set: Supported 00:07:52.296 Boot Partition: Not Supported 00:07:52.296 Memory Page Size Minimum: 4096 bytes 00:07:52.296 Memory Page Size Maximum: 65536 bytes 00:07:52.296 Persistent Memory Region: Not Supported 00:07:52.296 Optional Asynchronous Events Supported 00:07:52.296 Namespace Attribute Notices: Supported 00:07:52.296 Firmware Activation Notices: Not Supported 00:07:52.296 ANA Change Notices: Not Supported 00:07:52.296 PLE Aggregate Log Change Notices: Not Supported 00:07:52.296 LBA Status Info Alert Notices: Not Supported 00:07:52.296 EGE Aggregate Log Change Notices: Not Supported 00:07:52.296 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.296 Zone Descriptor Change Notices: Not Supported 00:07:52.296 Discovery Log Change Notices: Not Supported 00:07:52.296 Controller Attributes 00:07:52.296 128-bit Host Identifier: Not Supported 00:07:52.296 Non-Operational Permissive Mode: Not Supported 00:07:52.296 NVM Sets: Not Supported 00:07:52.296 Read Recovery Levels: Not Supported 00:07:52.296 Endurance Groups: Not Supported 00:07:52.296 Predictable Latency Mode: Not Supported 00:07:52.296 Traffic Based Keep ALive: Not Supported 00:07:52.296 Namespace Granularity: Not Supported 00:07:52.296 SQ Associations: Not Supported 00:07:52.296 UUID List: Not Supported 00:07:52.296 Multi-Domain Subsystem: Not Supported 00:07:52.296 Fixed Capacity Management: Not Supported 00:07:52.296 Variable Capacity Management: Not Supported 00:07:52.296 Delete Endurance Group: Not Supported 00:07:52.296 Delete NVM Set: Not Supported 00:07:52.296 Extended LBA Formats Supported: Supported 00:07:52.296 Flexible Data Placement Supported: Not Supported 00:07:52.296 00:07:52.296 Controller Memory Buffer Support 00:07:52.296 ================================ 00:07:52.296 Supported: No 00:07:52.296 00:07:52.296 Persistent Memory Region Support 00:07:52.296 ================================ 00:07:52.296 Supported: No 00:07:52.296 00:07:52.296 Admin Command Set Attributes 00:07:52.296 ============================ 00:07:52.296 Security Send/Receive: Not Supported 00:07:52.296 
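
[Editor's aside: for the Health Information blocks in these dumps, the tool reports temperature in both scales, and the printed pairs are consistent with the integer conversion C = K - 273:

    323 K - 273 = 50 C   (Current Temperature)
    343 K - 273 = 70 C   (Temperature Threshold)]
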
Format NVM: Supported 00:07:52.296 Firmware Activate/Download: Not Supported 00:07:52.296 Namespace Management: Supported 00:07:52.296 Device Self-Test: Not Supported 00:07:52.296 Directives: Supported 00:07:52.296 NVMe-MI: Not Supported 00:07:52.296 Virtualization Management: Not Supported 00:07:52.296 Doorbell Buffer Config: Supported 00:07:52.296 Get LBA Status Capability: Not Supported 00:07:52.296 Command & Feature Lockdown Capability: Not Supported 00:07:52.296 Abort Command Limit: 4 00:07:52.296 Async Event Request Limit: 4 00:07:52.296 Number of Firmware Slots: N/A 00:07:52.296 Firmware Slot 1 Read-Only: N/A 00:07:52.296 Firmware Activation Without Reset: N/A 00:07:52.296 Multiple Update Detection Support: N/A 00:07:52.296 Firmware Update Granularity: No Information Provided 00:07:52.296 Per-Namespace SMART Log: Yes 00:07:52.296 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.296 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:52.296 Command Effects Log Page: Supported 00:07:52.296 Get Log Page Extended Data: Supported 00:07:52.296 Telemetry Log Pages: Not Supported 00:07:52.296 Persistent Event Log Pages: Not Supported 00:07:52.296 Supported Log Pages Log Page: May Support 00:07:52.296 Commands Supported & Effects Log Page: Not Supported 00:07:52.296 Feature Identifiers & Effects Log Page:May Support 00:07:52.296 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.296 Data Area 4 for Telemetry Log: Not Supported 00:07:52.297 Error Log Page Entries Supported: 1 00:07:52.297 Keep Alive: Not Supported 00:07:52.297 00:07:52.297 NVM Command Set Attributes 00:07:52.297 ========================== 00:07:52.297 Submission Queue Entry Size 00:07:52.297 Max: 64 00:07:52.297 Min: 64 00:07:52.297 Completion Queue Entry Size 00:07:52.297 Max: 16 00:07:52.297 Min: 16 00:07:52.297 Number of Namespaces: 256 00:07:52.297 Compare Command: Supported 00:07:52.297 Write Uncorrectable Command: Not Supported 00:07:52.297 Dataset Management Command: Supported 00:07:52.297 Write Zeroes Command: Supported 00:07:52.297 Set Features Save Field: Supported 00:07:52.297 Reservations: Not Supported 00:07:52.297 Timestamp: Supported 00:07:52.297 Copy: Supported 00:07:52.297 Volatile Write Cache: Present 00:07:52.297 Atomic Write Unit (Normal): 1 00:07:52.297 Atomic Write Unit (PFail): 1 00:07:52.297 Atomic Compare & Write Unit: 1 00:07:52.297 Fused Compare & Write: Not Supported 00:07:52.297 Scatter-Gather List 00:07:52.297 SGL Command Set: Supported 00:07:52.297 SGL Keyed: Not Supported 00:07:52.297 SGL Bit Bucket Descriptor: Not Supported 00:07:52.297 SGL Metadata Pointer: Not Supported 00:07:52.297 Oversized SGL: Not Supported 00:07:52.297 SGL Metadata Address: Not Supported 00:07:52.297 SGL Offset: Not Supported 00:07:52.297 Transport SGL Data Block: Not Supported 00:07:52.297 Replay Protected Memory Block: Not Supported 00:07:52.297 00:07:52.297 Firmware Slot Information 00:07:52.297 ========================= 00:07:52.297 Active slot: 1 00:07:52.297 Slot 1 Firmware Revision: 1.0 00:07:52.297 00:07:52.297 00:07:52.297 Commands Supported and Effects 00:07:52.297 ============================== 00:07:52.297 Admin Commands 00:07:52.297 -------------- 00:07:52.297 Delete I/O Submission Queue (00h): Supported 00:07:52.297 Create I/O Submission Queue (01h): Supported 00:07:52.297 Get Log Page (02h): Supported 00:07:52.297 Delete I/O Completion Queue (04h): Supported 00:07:52.297 Create I/O Completion Queue (05h): Supported 00:07:52.297 Identify (06h): Supported 00:07:52.297 Abort (08h): Supported 
00:07:52.297 Set Features (09h): Supported 00:07:52.297 Get Features (0Ah): Supported 00:07:52.297 Asynchronous Event Request (0Ch): Supported 00:07:52.297 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.297 Directive Send (19h): Supported 00:07:52.297 Directive Receive (1Ah): Supported 00:07:52.297 Virtualization Management (1Ch): Supported 00:07:52.297 Doorbell Buffer Config (7Ch): Supported 00:07:52.297 Format NVM (80h): Supported LBA-Change 00:07:52.297 I/O Commands 00:07:52.297 ------------ 00:07:52.297 Flush (00h): Supported LBA-Change 00:07:52.297 Write (01h): Supported LBA-Change 00:07:52.297 Read (02h): Supported 00:07:52.297 Compare (05h): Supported 00:07:52.297 Write Zeroes (08h): Supported LBA-Change 00:07:52.297 Dataset Management (09h): Supported LBA-Change 00:07:52.297 Unknown (0Ch): Supported 00:07:52.297 Unknown (12h): Supported 00:07:52.297 Copy (19h): Supported LBA-Change 00:07:52.297 Unknown (1Dh): Supported LBA-Change 00:07:52.297 00:07:52.297 Error Log 00:07:52.297 ========= 00:07:52.297 00:07:52.297 Arbitration 00:07:52.297 =========== 00:07:52.297 Arbitration Burst: no limit 00:07:52.297 00:07:52.297 Power Management 00:07:52.297 ================ 00:07:52.297 Number of Power States: 1 00:07:52.297 Current Power State: Power State #0 00:07:52.297 Power State #0: 00:07:52.297 Max Power: 25.00 W 00:07:52.297 Non-Operational State: Operational 00:07:52.297 Entry Latency: 16 microseconds 00:07:52.297 Exit Latency: 4 microseconds 00:07:52.297 Relative Read Throughput: 0 00:07:52.297 Relative Read Latency: 0 00:07:52.297 Relative Write Throughput: 0 00:07:52.297 Relative Write Latency: 0 00:07:52.297 Idle Power: Not Reported 00:07:52.297 Active Power: Not Reported 00:07:52.297 Non-Operational Permissive Mode: Not Supported 00:07:52.297 00:07:52.297 Health Information 00:07:52.297 ================== 00:07:52.297 Critical Warnings: 00:07:52.297 Available Spare Space: OK 00:07:52.297 Temperature: OK 00:07:52.297 Device Reliability: OK 00:07:52.297 Read Only: No 00:07:52.297 Volatile Memory Backup: OK 00:07:52.297 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.297 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.297 Available Spare: 0% 00:07:52.297 Available Spare Threshold: 0% 00:07:52.297 Life Percentage Used: 0% 00:07:52.297 Data Units Read: 694 00:07:52.297 Data Units Written: 622 00:07:52.297 Host Read Commands: 39162 00:07:52.297 Host Write Commands: 38948 00:07:52.297 Controller Busy Time: 0 minutes 00:07:52.297 Power Cycles: 0 00:07:52.297 Power On Hours: 0 hours 00:07:52.297 Unsafe Shutdowns: 0 00:07:52.297 Unrecoverable Media Errors: 0 00:07:52.297 Lifetime Error Log Entries: 0 00:07:52.297 Warning Temperature Time: 0 minutes 00:07:52.297 Critical Temperature Time: 0 minutes 00:07:52.297 00:07:52.297 Number of Queues 00:07:52.297 ================ 00:07:52.297 Number of I/O Submission Queues: 64 00:07:52.297 Number of I/O Completion Queues: 64 00:07:52.297 00:07:52.297 ZNS Specific Controller Data 00:07:52.297 ============================ 00:07:52.297 Zone Append Size Limit: 0 00:07:52.297 00:07:52.297 00:07:52.297 Active Namespaces 00:07:52.297 ================= 00:07:52.297 Namespace ID:1 00:07:52.297 Error Recovery Timeout: Unlimited 00:07:52.297 Command Set Identifier: NVM (00h) 00:07:52.297 Deallocate: Supported 00:07:52.297 Deallocated/Unwritten Error: Supported 00:07:52.297 Deallocated Read Value: All 0x00 00:07:52.297 Deallocate in Write Zeroes: Not Supported 00:07:52.297 Deallocated Guard Field: 0xFFFF 00:07:52.297 Flush: 
Supported 00:07:52.297 Reservation: Not Supported 00:07:52.297 Metadata Transferred as: Separate Metadata Buffer 00:07:52.297 Namespace Sharing Capabilities: Private 00:07:52.297 Size (in LBAs): 1548666 (5GiB) 00:07:52.297 Capacity (in LBAs): 1548666 (5GiB) 00:07:52.297 Utilization (in LBAs): 1548666 (5GiB) 00:07:52.297 Thin Provisioning: Not Supported 00:07:52.297 Per-NS Atomic Units: No 00:07:52.297 Maximum Single Source Range Length: 128 00:07:52.297 Maximum Copy Length: 128 00:07:52.297 Maximum Source Range Count: 128 00:07:52.297 NGUID/EUI64 Never Reused: No 00:07:52.297 Namespace Write Protected: No 00:07:52.297 Number of LBA Formats: 8 00:07:52.297 Current LBA Format: LBA Format #07 00:07:52.297 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.297 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.297 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.297 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.297 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.297 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.297 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.297 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.297 00:07:52.297 NVM Specific Namespace Data 00:07:52.297 =========================== 00:07:52.297 Logical Block Storage Tag Mask: 0 00:07:52.297 Protection Information Capabilities: 00:07:52.297 16b Guard Protection Information Storage Tag Support: No 00:07:52.297 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.297 Storage Tag Check Read Support: No 00:07:52.297 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.297 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.297 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.297 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.297 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.297 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.297 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.297 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.297 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:52.297 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:52.297 ===================================================== 00:07:52.297 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:52.297 ===================================================== 00:07:52.297 Controller Capabilities/Features 00:07:52.297 ================================ 00:07:52.297 Vendor ID: 1b36 00:07:52.297 Subsystem Vendor ID: 1af4 00:07:52.297 Serial Number: 12341 00:07:52.297 Model Number: QEMU NVMe Ctrl 00:07:52.297 Firmware Version: 8.0.0 00:07:52.297 Recommended Arb Burst: 6 00:07:52.297 IEEE OUI Identifier: 00 54 52 00:07:52.297 Multi-path I/O 00:07:52.297 May have multiple subsystem ports: No 00:07:52.297 May have multiple controllers: No 00:07:52.297 Associated with SR-IOV VF: No 00:07:52.297 Max Data Transfer Size: 524288 00:07:52.297 Max Number of Namespaces: 256 00:07:52.297 Max Number of I/O Queues: 64 00:07:52.297 NVMe 
Specification Version (VS): 1.4 00:07:52.297 NVMe Specification Version (Identify): 1.4 00:07:52.297 Maximum Queue Entries: 2048 00:07:52.297 Contiguous Queues Required: Yes 00:07:52.297 Arbitration Mechanisms Supported 00:07:52.297 Weighted Round Robin: Not Supported 00:07:52.298 Vendor Specific: Not Supported 00:07:52.298 Reset Timeout: 7500 ms 00:07:52.298 Doorbell Stride: 4 bytes 00:07:52.298 NVM Subsystem Reset: Not Supported 00:07:52.298 Command Sets Supported 00:07:52.298 NVM Command Set: Supported 00:07:52.298 Boot Partition: Not Supported 00:07:52.298 Memory Page Size Minimum: 4096 bytes 00:07:52.298 Memory Page Size Maximum: 65536 bytes 00:07:52.298 Persistent Memory Region: Not Supported 00:07:52.298 Optional Asynchronous Events Supported 00:07:52.298 Namespace Attribute Notices: Supported 00:07:52.298 Firmware Activation Notices: Not Supported 00:07:52.298 ANA Change Notices: Not Supported 00:07:52.298 PLE Aggregate Log Change Notices: Not Supported 00:07:52.298 LBA Status Info Alert Notices: Not Supported 00:07:52.298 EGE Aggregate Log Change Notices: Not Supported 00:07:52.298 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.298 Zone Descriptor Change Notices: Not Supported 00:07:52.298 Discovery Log Change Notices: Not Supported 00:07:52.298 Controller Attributes 00:07:52.298 128-bit Host Identifier: Not Supported 00:07:52.298 Non-Operational Permissive Mode: Not Supported 00:07:52.298 NVM Sets: Not Supported 00:07:52.298 Read Recovery Levels: Not Supported 00:07:52.298 Endurance Groups: Not Supported 00:07:52.298 Predictable Latency Mode: Not Supported 00:07:52.298 Traffic Based Keep ALive: Not Supported 00:07:52.298 Namespace Granularity: Not Supported 00:07:52.298 SQ Associations: Not Supported 00:07:52.298 UUID List: Not Supported 00:07:52.298 Multi-Domain Subsystem: Not Supported 00:07:52.298 Fixed Capacity Management: Not Supported 00:07:52.298 Variable Capacity Management: Not Supported 00:07:52.298 Delete Endurance Group: Not Supported 00:07:52.298 Delete NVM Set: Not Supported 00:07:52.298 Extended LBA Formats Supported: Supported 00:07:52.298 Flexible Data Placement Supported: Not Supported 00:07:52.298 00:07:52.298 Controller Memory Buffer Support 00:07:52.298 ================================ 00:07:52.298 Supported: No 00:07:52.298 00:07:52.298 Persistent Memory Region Support 00:07:52.298 ================================ 00:07:52.298 Supported: No 00:07:52.298 00:07:52.298 Admin Command Set Attributes 00:07:52.298 ============================ 00:07:52.298 Security Send/Receive: Not Supported 00:07:52.298 Format NVM: Supported 00:07:52.298 Firmware Activate/Download: Not Supported 00:07:52.298 Namespace Management: Supported 00:07:52.298 Device Self-Test: Not Supported 00:07:52.298 Directives: Supported 00:07:52.298 NVMe-MI: Not Supported 00:07:52.298 Virtualization Management: Not Supported 00:07:52.298 Doorbell Buffer Config: Supported 00:07:52.298 Get LBA Status Capability: Not Supported 00:07:52.298 Command & Feature Lockdown Capability: Not Supported 00:07:52.298 Abort Command Limit: 4 00:07:52.298 Async Event Request Limit: 4 00:07:52.298 Number of Firmware Slots: N/A 00:07:52.298 Firmware Slot 1 Read-Only: N/A 00:07:52.298 Firmware Activation Without Reset: N/A 00:07:52.298 Multiple Update Detection Support: N/A 00:07:52.298 Firmware Update Granularity: No Information Provided 00:07:52.298 Per-Namespace SMART Log: Yes 00:07:52.298 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.298 Subsystem NQN: nqn.2019-08.org.qemu:12341 
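
[Editor's aside: the nvme.sh trace fragments interleaved with these dumps (nvme/nvme.sh@15 and @16) show the loop that drives one spdk_nvme_identify run per controller. A minimal reconstruction follows; the bdfs array contents are inferred from the four PCIe addresses probed in this log, while the binary path and flags are taken verbatim from the trace:

    #!/usr/bin/env bash
    # One identify dump per NVMe controller under test
    # (addresses inferred from the four controllers in this log).
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done
]
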
00:07:52.298 Command Effects Log Page: Supported 00:07:52.298 Get Log Page Extended Data: Supported 00:07:52.298 Telemetry Log Pages: Not Supported 00:07:52.298 Persistent Event Log Pages: Not Supported 00:07:52.298 Supported Log Pages Log Page: May Support 00:07:52.298 Commands Supported & Effects Log Page: Not Supported 00:07:52.298 Feature Identifiers & Effects Log Page:May Support 00:07:52.298 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.298 Data Area 4 for Telemetry Log: Not Supported 00:07:52.298 Error Log Page Entries Supported: 1 00:07:52.298 Keep Alive: Not Supported 00:07:52.298 00:07:52.298 NVM Command Set Attributes 00:07:52.298 ========================== 00:07:52.298 Submission Queue Entry Size 00:07:52.298 Max: 64 00:07:52.298 Min: 64 00:07:52.298 Completion Queue Entry Size 00:07:52.298 Max: 16 00:07:52.298 Min: 16 00:07:52.298 Number of Namespaces: 256 00:07:52.298 Compare Command: Supported 00:07:52.298 Write Uncorrectable Command: Not Supported 00:07:52.298 Dataset Management Command: Supported 00:07:52.298 Write Zeroes Command: Supported 00:07:52.298 Set Features Save Field: Supported 00:07:52.298 Reservations: Not Supported 00:07:52.298 Timestamp: Supported 00:07:52.298 Copy: Supported 00:07:52.298 Volatile Write Cache: Present 00:07:52.298 Atomic Write Unit (Normal): 1 00:07:52.298 Atomic Write Unit (PFail): 1 00:07:52.298 Atomic Compare & Write Unit: 1 00:07:52.298 Fused Compare & Write: Not Supported 00:07:52.298 Scatter-Gather List 00:07:52.298 SGL Command Set: Supported 00:07:52.298 SGL Keyed: Not Supported 00:07:52.298 SGL Bit Bucket Descriptor: Not Supported 00:07:52.298 SGL Metadata Pointer: Not Supported 00:07:52.298 Oversized SGL: Not Supported 00:07:52.298 SGL Metadata Address: Not Supported 00:07:52.298 SGL Offset: Not Supported 00:07:52.298 Transport SGL Data Block: Not Supported 00:07:52.298 Replay Protected Memory Block: Not Supported 00:07:52.298 00:07:52.298 Firmware Slot Information 00:07:52.298 ========================= 00:07:52.298 Active slot: 1 00:07:52.298 Slot 1 Firmware Revision: 1.0 00:07:52.298 00:07:52.298 00:07:52.298 Commands Supported and Effects 00:07:52.298 ============================== 00:07:52.298 Admin Commands 00:07:52.298 -------------- 00:07:52.298 Delete I/O Submission Queue (00h): Supported 00:07:52.298 Create I/O Submission Queue (01h): Supported 00:07:52.298 Get Log Page (02h): Supported 00:07:52.298 Delete I/O Completion Queue (04h): Supported 00:07:52.298 Create I/O Completion Queue (05h): Supported 00:07:52.298 Identify (06h): Supported 00:07:52.298 Abort (08h): Supported 00:07:52.298 Set Features (09h): Supported 00:07:52.298 Get Features (0Ah): Supported 00:07:52.298 Asynchronous Event Request (0Ch): Supported 00:07:52.298 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.298 Directive Send (19h): Supported 00:07:52.298 Directive Receive (1Ah): Supported 00:07:52.298 Virtualization Management (1Ch): Supported 00:07:52.298 Doorbell Buffer Config (7Ch): Supported 00:07:52.298 Format NVM (80h): Supported LBA-Change 00:07:52.298 I/O Commands 00:07:52.298 ------------ 00:07:52.298 Flush (00h): Supported LBA-Change 00:07:52.298 Write (01h): Supported LBA-Change 00:07:52.298 Read (02h): Supported 00:07:52.298 Compare (05h): Supported 00:07:52.298 Write Zeroes (08h): Supported LBA-Change 00:07:52.298 Dataset Management (09h): Supported LBA-Change 00:07:52.298 Unknown (0Ch): Supported 00:07:52.298 Unknown (12h): Supported 00:07:52.298 Copy (19h): Supported LBA-Change 00:07:52.298 Unknown (1Dh): 
Supported LBA-Change 00:07:52.298 00:07:52.298 Error Log 00:07:52.298 ========= 00:07:52.298 00:07:52.298 Arbitration 00:07:52.298 =========== 00:07:52.298 Arbitration Burst: no limit 00:07:52.298 00:07:52.298 Power Management 00:07:52.298 ================ 00:07:52.298 Number of Power States: 1 00:07:52.298 Current Power State: Power State #0 00:07:52.298 Power State #0: 00:07:52.298 Max Power: 25.00 W 00:07:52.298 Non-Operational State: Operational 00:07:52.298 Entry Latency: 16 microseconds 00:07:52.298 Exit Latency: 4 microseconds 00:07:52.298 Relative Read Throughput: 0 00:07:52.298 Relative Read Latency: 0 00:07:52.298 Relative Write Throughput: 0 00:07:52.298 Relative Write Latency: 0 00:07:52.298 Idle Power: Not Reported 00:07:52.298 Active Power: Not Reported 00:07:52.298 Non-Operational Permissive Mode: Not Supported 00:07:52.298 00:07:52.298 Health Information 00:07:52.298 ================== 00:07:52.298 Critical Warnings: 00:07:52.298 Available Spare Space: OK 00:07:52.298 Temperature: OK 00:07:52.298 Device Reliability: OK 00:07:52.298 Read Only: No 00:07:52.298 Volatile Memory Backup: OK 00:07:52.298 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.298 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.298 Available Spare: 0% 00:07:52.298 Available Spare Threshold: 0% 00:07:52.298 Life Percentage Used: 0% 00:07:52.298 Data Units Read: 1109 00:07:52.298 Data Units Written: 983 00:07:52.298 Host Read Commands: 58712 00:07:52.298 Host Write Commands: 57608 00:07:52.298 Controller Busy Time: 0 minutes 00:07:52.298 Power Cycles: 0 00:07:52.298 Power On Hours: 0 hours 00:07:52.298 Unsafe Shutdowns: 0 00:07:52.298 Unrecoverable Media Errors: 0 00:07:52.298 Lifetime Error Log Entries: 0 00:07:52.298 Warning Temperature Time: 0 minutes 00:07:52.298 Critical Temperature Time: 0 minutes 00:07:52.298 00:07:52.298 Number of Queues 00:07:52.298 ================ 00:07:52.298 Number of I/O Submission Queues: 64 00:07:52.298 Number of I/O Completion Queues: 64 00:07:52.298 00:07:52.298 ZNS Specific Controller Data 00:07:52.298 ============================ 00:07:52.298 Zone Append Size Limit: 0 00:07:52.298 00:07:52.298 00:07:52.298 Active Namespaces 00:07:52.298 ================= 00:07:52.298 Namespace ID:1 00:07:52.298 Error Recovery Timeout: Unlimited 00:07:52.299 Command Set Identifier: NVM (00h) 00:07:52.299 Deallocate: Supported 00:07:52.299 Deallocated/Unwritten Error: Supported 00:07:52.299 Deallocated Read Value: All 0x00 00:07:52.299 Deallocate in Write Zeroes: Not Supported 00:07:52.299 Deallocated Guard Field: 0xFFFF 00:07:52.299 Flush: Supported 00:07:52.299 Reservation: Not Supported 00:07:52.299 Namespace Sharing Capabilities: Private 00:07:52.299 Size (in LBAs): 1310720 (5GiB) 00:07:52.299 Capacity (in LBAs): 1310720 (5GiB) 00:07:52.299 Utilization (in LBAs): 1310720 (5GiB) 00:07:52.299 Thin Provisioning: Not Supported 00:07:52.299 Per-NS Atomic Units: No 00:07:52.299 Maximum Single Source Range Length: 128 00:07:52.299 Maximum Copy Length: 128 00:07:52.299 Maximum Source Range Count: 128 00:07:52.299 NGUID/EUI64 Never Reused: No 00:07:52.299 Namespace Write Protected: No 00:07:52.299 Number of LBA Formats: 8 00:07:52.299 Current LBA Format: LBA Format #04 00:07:52.299 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.299 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.299 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.299 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.299 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:52.299 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.299 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.299 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.299 00:07:52.299 NVM Specific Namespace Data 00:07:52.299 =========================== 00:07:52.299 Logical Block Storage Tag Mask: 0 00:07:52.299 Protection Information Capabilities: 00:07:52.299 16b Guard Protection Information Storage Tag Support: No 00:07:52.299 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.299 Storage Tag Check Read Support: No 00:07:52.299 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.299 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.299 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.299 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.299 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.299 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.299 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.299 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.299 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:52.299 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:52.560 ===================================================== 00:07:52.560 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:52.560 ===================================================== 00:07:52.560 Controller Capabilities/Features 00:07:52.560 ================================ 00:07:52.560 Vendor ID: 1b36 00:07:52.560 Subsystem Vendor ID: 1af4 00:07:52.560 Serial Number: 12342 00:07:52.560 Model Number: QEMU NVMe Ctrl 00:07:52.560 Firmware Version: 8.0.0 00:07:52.560 Recommended Arb Burst: 6 00:07:52.560 IEEE OUI Identifier: 00 54 52 00:07:52.560 Multi-path I/O 00:07:52.560 May have multiple subsystem ports: No 00:07:52.560 May have multiple controllers: No 00:07:52.560 Associated with SR-IOV VF: No 00:07:52.560 Max Data Transfer Size: 524288 00:07:52.560 Max Number of Namespaces: 256 00:07:52.560 Max Number of I/O Queues: 64 00:07:52.560 NVMe Specification Version (VS): 1.4 00:07:52.560 NVMe Specification Version (Identify): 1.4 00:07:52.560 Maximum Queue Entries: 2048 00:07:52.560 Contiguous Queues Required: Yes 00:07:52.560 Arbitration Mechanisms Supported 00:07:52.560 Weighted Round Robin: Not Supported 00:07:52.560 Vendor Specific: Not Supported 00:07:52.560 Reset Timeout: 7500 ms 00:07:52.560 Doorbell Stride: 4 bytes 00:07:52.560 NVM Subsystem Reset: Not Supported 00:07:52.560 Command Sets Supported 00:07:52.560 NVM Command Set: Supported 00:07:52.560 Boot Partition: Not Supported 00:07:52.560 Memory Page Size Minimum: 4096 bytes 00:07:52.560 Memory Page Size Maximum: 65536 bytes 00:07:52.560 Persistent Memory Region: Not Supported 00:07:52.560 Optional Asynchronous Events Supported 00:07:52.560 Namespace Attribute Notices: Supported 00:07:52.560 Firmware Activation Notices: Not Supported 00:07:52.560 ANA Change Notices: Not Supported 00:07:52.560 PLE Aggregate Log Change Notices: Not Supported 00:07:52.560 LBA Status Info Alert Notices: 
Not Supported 00:07:52.560 EGE Aggregate Log Change Notices: Not Supported 00:07:52.560 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.560 Zone Descriptor Change Notices: Not Supported 00:07:52.560 Discovery Log Change Notices: Not Supported 00:07:52.560 Controller Attributes 00:07:52.560 128-bit Host Identifier: Not Supported 00:07:52.560 Non-Operational Permissive Mode: Not Supported 00:07:52.560 NVM Sets: Not Supported 00:07:52.560 Read Recovery Levels: Not Supported 00:07:52.560 Endurance Groups: Not Supported 00:07:52.560 Predictable Latency Mode: Not Supported 00:07:52.560 Traffic Based Keep ALive: Not Supported 00:07:52.560 Namespace Granularity: Not Supported 00:07:52.560 SQ Associations: Not Supported 00:07:52.560 UUID List: Not Supported 00:07:52.560 Multi-Domain Subsystem: Not Supported 00:07:52.560 Fixed Capacity Management: Not Supported 00:07:52.560 Variable Capacity Management: Not Supported 00:07:52.560 Delete Endurance Group: Not Supported 00:07:52.560 Delete NVM Set: Not Supported 00:07:52.560 Extended LBA Formats Supported: Supported 00:07:52.560 Flexible Data Placement Supported: Not Supported 00:07:52.560 00:07:52.560 Controller Memory Buffer Support 00:07:52.560 ================================ 00:07:52.560 Supported: No 00:07:52.560 00:07:52.560 Persistent Memory Region Support 00:07:52.560 ================================ 00:07:52.560 Supported: No 00:07:52.560 00:07:52.560 Admin Command Set Attributes 00:07:52.560 ============================ 00:07:52.560 Security Send/Receive: Not Supported 00:07:52.560 Format NVM: Supported 00:07:52.560 Firmware Activate/Download: Not Supported 00:07:52.560 Namespace Management: Supported 00:07:52.560 Device Self-Test: Not Supported 00:07:52.560 Directives: Supported 00:07:52.560 NVMe-MI: Not Supported 00:07:52.560 Virtualization Management: Not Supported 00:07:52.560 Doorbell Buffer Config: Supported 00:07:52.560 Get LBA Status Capability: Not Supported 00:07:52.560 Command & Feature Lockdown Capability: Not Supported 00:07:52.560 Abort Command Limit: 4 00:07:52.560 Async Event Request Limit: 4 00:07:52.560 Number of Firmware Slots: N/A 00:07:52.560 Firmware Slot 1 Read-Only: N/A 00:07:52.560 Firmware Activation Without Reset: N/A 00:07:52.560 Multiple Update Detection Support: N/A 00:07:52.560 Firmware Update Granularity: No Information Provided 00:07:52.560 Per-Namespace SMART Log: Yes 00:07:52.560 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.560 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:52.560 Command Effects Log Page: Supported 00:07:52.560 Get Log Page Extended Data: Supported 00:07:52.560 Telemetry Log Pages: Not Supported 00:07:52.560 Persistent Event Log Pages: Not Supported 00:07:52.560 Supported Log Pages Log Page: May Support 00:07:52.560 Commands Supported & Effects Log Page: Not Supported 00:07:52.560 Feature Identifiers & Effects Log Page:May Support 00:07:52.560 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.560 Data Area 4 for Telemetry Log: Not Supported 00:07:52.560 Error Log Page Entries Supported: 1 00:07:52.560 Keep Alive: Not Supported 00:07:52.560 00:07:52.560 NVM Command Set Attributes 00:07:52.560 ========================== 00:07:52.560 Submission Queue Entry Size 00:07:52.560 Max: 64 00:07:52.560 Min: 64 00:07:52.560 Completion Queue Entry Size 00:07:52.560 Max: 16 00:07:52.560 Min: 16 00:07:52.560 Number of Namespaces: 256 00:07:52.560 Compare Command: Supported 00:07:52.560 Write Uncorrectable Command: Not Supported 00:07:52.560 Dataset Management Command: 
Supported 00:07:52.560 Write Zeroes Command: Supported 00:07:52.560 Set Features Save Field: Supported 00:07:52.560 Reservations: Not Supported 00:07:52.560 Timestamp: Supported 00:07:52.560 Copy: Supported 00:07:52.560 Volatile Write Cache: Present 00:07:52.560 Atomic Write Unit (Normal): 1 00:07:52.560 Atomic Write Unit (PFail): 1 00:07:52.560 Atomic Compare & Write Unit: 1 00:07:52.560 Fused Compare & Write: Not Supported 00:07:52.560 Scatter-Gather List 00:07:52.560 SGL Command Set: Supported 00:07:52.560 SGL Keyed: Not Supported 00:07:52.560 SGL Bit Bucket Descriptor: Not Supported 00:07:52.560 SGL Metadata Pointer: Not Supported 00:07:52.560 Oversized SGL: Not Supported 00:07:52.560 SGL Metadata Address: Not Supported 00:07:52.560 SGL Offset: Not Supported 00:07:52.560 Transport SGL Data Block: Not Supported 00:07:52.560 Replay Protected Memory Block: Not Supported 00:07:52.560 00:07:52.560 Firmware Slot Information 00:07:52.560 ========================= 00:07:52.560 Active slot: 1 00:07:52.560 Slot 1 Firmware Revision: 1.0 00:07:52.560 00:07:52.560 00:07:52.560 Commands Supported and Effects 00:07:52.560 ============================== 00:07:52.560 Admin Commands 00:07:52.560 -------------- 00:07:52.560 Delete I/O Submission Queue (00h): Supported 00:07:52.560 Create I/O Submission Queue (01h): Supported 00:07:52.560 Get Log Page (02h): Supported 00:07:52.560 Delete I/O Completion Queue (04h): Supported 00:07:52.560 Create I/O Completion Queue (05h): Supported 00:07:52.560 Identify (06h): Supported 00:07:52.560 Abort (08h): Supported 00:07:52.560 Set Features (09h): Supported 00:07:52.560 Get Features (0Ah): Supported 00:07:52.560 Asynchronous Event Request (0Ch): Supported 00:07:52.560 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.560 Directive Send (19h): Supported 00:07:52.560 Directive Receive (1Ah): Supported 00:07:52.560 Virtualization Management (1Ch): Supported 00:07:52.560 Doorbell Buffer Config (7Ch): Supported 00:07:52.560 Format NVM (80h): Supported LBA-Change 00:07:52.560 I/O Commands 00:07:52.560 ------------ 00:07:52.560 Flush (00h): Supported LBA-Change 00:07:52.561 Write (01h): Supported LBA-Change 00:07:52.561 Read (02h): Supported 00:07:52.561 Compare (05h): Supported 00:07:52.561 Write Zeroes (08h): Supported LBA-Change 00:07:52.561 Dataset Management (09h): Supported LBA-Change 00:07:52.561 Unknown (0Ch): Supported 00:07:52.561 Unknown (12h): Supported 00:07:52.561 Copy (19h): Supported LBA-Change 00:07:52.561 Unknown (1Dh): Supported LBA-Change 00:07:52.561 00:07:52.561 Error Log 00:07:52.561 ========= 00:07:52.561 00:07:52.561 Arbitration 00:07:52.561 =========== 00:07:52.561 Arbitration Burst: no limit 00:07:52.561 00:07:52.561 Power Management 00:07:52.561 ================ 00:07:52.561 Number of Power States: 1 00:07:52.561 Current Power State: Power State #0 00:07:52.561 Power State #0: 00:07:52.561 Max Power: 25.00 W 00:07:52.561 Non-Operational State: Operational 00:07:52.561 Entry Latency: 16 microseconds 00:07:52.561 Exit Latency: 4 microseconds 00:07:52.561 Relative Read Throughput: 0 00:07:52.561 Relative Read Latency: 0 00:07:52.561 Relative Write Throughput: 0 00:07:52.561 Relative Write Latency: 0 00:07:52.561 Idle Power: Not Reported 00:07:52.561 Active Power: Not Reported 00:07:52.561 Non-Operational Permissive Mode: Not Supported 00:07:52.561 00:07:52.561 Health Information 00:07:52.561 ================== 00:07:52.561 Critical Warnings: 00:07:52.561 Available Spare Space: OK 00:07:52.561 Temperature: OK 00:07:52.561 Device 
Reliability: OK 00:07:52.561 Read Only: No 00:07:52.561 Volatile Memory Backup: OK 00:07:52.561 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.561 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.561 Available Spare: 0% 00:07:52.561 Available Spare Threshold: 0% 00:07:52.561 Life Percentage Used: 0% 00:07:52.561 Data Units Read: 2307 00:07:52.561 Data Units Written: 2094 00:07:52.561 Host Read Commands: 120385 00:07:52.561 Host Write Commands: 118654 00:07:52.561 Controller Busy Time: 0 minutes 00:07:52.561 Power Cycles: 0 00:07:52.561 Power On Hours: 0 hours 00:07:52.561 Unsafe Shutdowns: 0 00:07:52.561 Unrecoverable Media Errors: 0 00:07:52.561 Lifetime Error Log Entries: 0 00:07:52.561 Warning Temperature Time: 0 minutes 00:07:52.561 Critical Temperature Time: 0 minutes 00:07:52.561 00:07:52.561 Number of Queues 00:07:52.561 ================ 00:07:52.561 Number of I/O Submission Queues: 64 00:07:52.561 Number of I/O Completion Queues: 64 00:07:52.561 00:07:52.561 ZNS Specific Controller Data 00:07:52.561 ============================ 00:07:52.561 Zone Append Size Limit: 0 00:07:52.561 00:07:52.561 00:07:52.561 Active Namespaces 00:07:52.561 ================= 00:07:52.561 Namespace ID:1 00:07:52.561 Error Recovery Timeout: Unlimited 00:07:52.561 Command Set Identifier: NVM (00h) 00:07:52.561 Deallocate: Supported 00:07:52.561 Deallocated/Unwritten Error: Supported 00:07:52.561 Deallocated Read Value: All 0x00 00:07:52.561 Deallocate in Write Zeroes: Not Supported 00:07:52.561 Deallocated Guard Field: 0xFFFF 00:07:52.561 Flush: Supported 00:07:52.561 Reservation: Not Supported 00:07:52.561 Namespace Sharing Capabilities: Private 00:07:52.561 Size (in LBAs): 1048576 (4GiB) 00:07:52.561 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.561 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.561 Thin Provisioning: Not Supported 00:07:52.561 Per-NS Atomic Units: No 00:07:52.561 Maximum Single Source Range Length: 128 00:07:52.561 Maximum Copy Length: 128 00:07:52.561 Maximum Source Range Count: 128 00:07:52.561 NGUID/EUI64 Never Reused: No 00:07:52.561 Namespace Write Protected: No 00:07:52.561 Number of LBA Formats: 8 00:07:52.561 Current LBA Format: LBA Format #04 00:07:52.561 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.561 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.561 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.561 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.561 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.561 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.561 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.561 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.561 00:07:52.561 NVM Specific Namespace Data 00:07:52.561 =========================== 00:07:52.561 Logical Block Storage Tag Mask: 0 00:07:52.561 Protection Information Capabilities: 00:07:52.561 16b Guard Protection Information Storage Tag Support: No 00:07:52.561 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.561 Storage Tag Check Read Support: No 00:07:52.561 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Namespace ID:2 00:07:52.561 Error Recovery Timeout: Unlimited 00:07:52.561 Command Set Identifier: NVM (00h) 00:07:52.561 Deallocate: Supported 00:07:52.561 Deallocated/Unwritten Error: Supported 00:07:52.561 Deallocated Read Value: All 0x00 00:07:52.561 Deallocate in Write Zeroes: Not Supported 00:07:52.561 Deallocated Guard Field: 0xFFFF 00:07:52.561 Flush: Supported 00:07:52.561 Reservation: Not Supported 00:07:52.561 Namespace Sharing Capabilities: Private 00:07:52.561 Size (in LBAs): 1048576 (4GiB) 00:07:52.561 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.561 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.561 Thin Provisioning: Not Supported 00:07:52.561 Per-NS Atomic Units: No 00:07:52.561 Maximum Single Source Range Length: 128 00:07:52.561 Maximum Copy Length: 128 00:07:52.561 Maximum Source Range Count: 128 00:07:52.561 NGUID/EUI64 Never Reused: No 00:07:52.561 Namespace Write Protected: No 00:07:52.561 Number of LBA Formats: 8 00:07:52.561 Current LBA Format: LBA Format #04 00:07:52.561 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.561 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.561 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.561 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.561 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.561 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.561 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.561 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.561 00:07:52.561 NVM Specific Namespace Data 00:07:52.561 =========================== 00:07:52.561 Logical Block Storage Tag Mask: 0 00:07:52.561 Protection Information Capabilities: 00:07:52.561 16b Guard Protection Information Storage Tag Support: No 00:07:52.561 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.561 Storage Tag Check Read Support: No 00:07:52.561 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.561 Namespace ID:3 00:07:52.561 Error Recovery Timeout: Unlimited 00:07:52.561 Command Set Identifier: NVM (00h) 00:07:52.561 Deallocate: Supported 00:07:52.561 Deallocated/Unwritten Error: Supported 00:07:52.561 Deallocated Read Value: All 0x00 00:07:52.561 Deallocate in Write Zeroes: Not Supported 00:07:52.561 Deallocated Guard Field: 0xFFFF 00:07:52.561 Flush: Supported 00:07:52.561 Reservation: Not Supported 00:07:52.561 
Namespace Sharing Capabilities: Private 00:07:52.561 Size (in LBAs): 1048576 (4GiB) 00:07:52.561 Capacity (in LBAs): 1048576 (4GiB) 00:07:52.561 Utilization (in LBAs): 1048576 (4GiB) 00:07:52.561 Thin Provisioning: Not Supported 00:07:52.561 Per-NS Atomic Units: No 00:07:52.561 Maximum Single Source Range Length: 128 00:07:52.561 Maximum Copy Length: 128 00:07:52.561 Maximum Source Range Count: 128 00:07:52.561 NGUID/EUI64 Never Reused: No 00:07:52.561 Namespace Write Protected: No 00:07:52.561 Number of LBA Formats: 8 00:07:52.561 Current LBA Format: LBA Format #04 00:07:52.561 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.561 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.561 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.561 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.561 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.561 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.561 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:52.562 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.562 00:07:52.562 NVM Specific Namespace Data 00:07:52.562 =========================== 00:07:52.562 Logical Block Storage Tag Mask: 0 00:07:52.562 Protection Information Capabilities: 00:07:52.562 16b Guard Protection Information Storage Tag Support: No 00:07:52.562 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.562 Storage Tag Check Read Support: No 00:07:52.562 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.562 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.562 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.562 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.562 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.562 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.562 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.562 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.562 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:52.562 18:02:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:52.821 ===================================================== 00:07:52.821 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:52.821 ===================================================== 00:07:52.821 Controller Capabilities/Features 00:07:52.821 ================================ 00:07:52.821 Vendor ID: 1b36 00:07:52.821 Subsystem Vendor ID: 1af4 00:07:52.821 Serial Number: 12343 00:07:52.821 Model Number: QEMU NVMe Ctrl 00:07:52.821 Firmware Version: 8.0.0 00:07:52.821 Recommended Arb Burst: 6 00:07:52.821 IEEE OUI Identifier: 00 54 52 00:07:52.821 Multi-path I/O 00:07:52.821 May have multiple subsystem ports: No 00:07:52.821 May have multiple controllers: Yes 00:07:52.821 Associated with SR-IOV VF: No 00:07:52.821 Max Data Transfer Size: 524288 00:07:52.821 Max Number of Namespaces: 256 00:07:52.821 Max Number of I/O Queues: 64 00:07:52.821 NVMe Specification Version (VS): 1.4 00:07:52.821 NVMe Specification Version (Identify): 1.4 00:07:52.821 Maximum Queue Entries: 2048 
00:07:52.821 Contiguous Queues Required: Yes 00:07:52.821 Arbitration Mechanisms Supported 00:07:52.821 Weighted Round Robin: Not Supported 00:07:52.821 Vendor Specific: Not Supported 00:07:52.821 Reset Timeout: 7500 ms 00:07:52.821 Doorbell Stride: 4 bytes 00:07:52.821 NVM Subsystem Reset: Not Supported 00:07:52.821 Command Sets Supported 00:07:52.821 NVM Command Set: Supported 00:07:52.821 Boot Partition: Not Supported 00:07:52.821 Memory Page Size Minimum: 4096 bytes 00:07:52.821 Memory Page Size Maximum: 65536 bytes 00:07:52.821 Persistent Memory Region: Not Supported 00:07:52.821 Optional Asynchronous Events Supported 00:07:52.821 Namespace Attribute Notices: Supported 00:07:52.821 Firmware Activation Notices: Not Supported 00:07:52.821 ANA Change Notices: Not Supported 00:07:52.821 PLE Aggregate Log Change Notices: Not Supported 00:07:52.821 LBA Status Info Alert Notices: Not Supported 00:07:52.821 EGE Aggregate Log Change Notices: Not Supported 00:07:52.821 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.821 Zone Descriptor Change Notices: Not Supported 00:07:52.821 Discovery Log Change Notices: Not Supported 00:07:52.821 Controller Attributes 00:07:52.821 128-bit Host Identifier: Not Supported 00:07:52.821 Non-Operational Permissive Mode: Not Supported 00:07:52.821 NVM Sets: Not Supported 00:07:52.821 Read Recovery Levels: Not Supported 00:07:52.821 Endurance Groups: Supported 00:07:52.821 Predictable Latency Mode: Not Supported 00:07:52.821 Traffic Based Keep Alive: Not Supported 00:07:52.821 Namespace Granularity: Not Supported 00:07:52.821 SQ Associations: Not Supported 00:07:52.821 UUID List: Not Supported 00:07:52.821 Multi-Domain Subsystem: Not Supported 00:07:52.821 Fixed Capacity Management: Not Supported 00:07:52.821 Variable Capacity Management: Not Supported 00:07:52.821 Delete Endurance Group: Not Supported 00:07:52.821 Delete NVM Set: Not Supported 00:07:52.821 Extended LBA Formats Supported: Supported 00:07:52.821 Flexible Data Placement Supported: Supported 00:07:52.821 00:07:52.821 Controller Memory Buffer Support 00:07:52.821 ================================ 00:07:52.821 Supported: No 00:07:52.821 00:07:52.821 Persistent Memory Region Support 00:07:52.821 ================================ 00:07:52.821 Supported: No 00:07:52.821 00:07:52.821 Admin Command Set Attributes 00:07:52.821 ============================ 00:07:52.822 Security Send/Receive: Not Supported 00:07:52.822 Format NVM: Supported 00:07:52.822 Firmware Activate/Download: Not Supported 00:07:52.822 Namespace Management: Supported 00:07:52.822 Device Self-Test: Not Supported 00:07:52.822 Directives: Supported 00:07:52.822 NVMe-MI: Not Supported 00:07:52.822 Virtualization Management: Not Supported 00:07:52.822 Doorbell Buffer Config: Supported 00:07:52.822 Get LBA Status Capability: Not Supported 00:07:52.822 Command & Feature Lockdown Capability: Not Supported 00:07:52.822 Abort Command Limit: 4 00:07:52.822 Async Event Request Limit: 4 00:07:52.822 Number of Firmware Slots: N/A 00:07:52.822 Firmware Slot 1 Read-Only: N/A 00:07:52.822 Firmware Activation Without Reset: N/A 00:07:52.822 Multiple Update Detection Support: N/A 00:07:52.822 Firmware Update Granularity: No Information Provided 00:07:52.822 Per-Namespace SMART Log: Yes 00:07:52.822 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.822 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:52.822 Command Effects Log Page: Supported 00:07:52.822 Get Log Page Extended Data: Supported 00:07:52.822 Telemetry Log Pages: Not
Supported 00:07:52.822 Persistent Event Log Pages: Not Supported 00:07:52.822 Supported Log Pages Log Page: May Support 00:07:52.822 Commands Supported & Effects Log Page: Not Supported 00:07:52.822 Feature Identifiers & Effects Log Page: May Support 00:07:52.822 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.822 Data Area 4 for Telemetry Log: Not Supported 00:07:52.822 Error Log Page Entries Supported: 1 00:07:52.822 Keep Alive: Not Supported 00:07:52.822 00:07:52.822 NVM Command Set Attributes 00:07:52.822 ========================== 00:07:52.822 Submission Queue Entry Size 00:07:52.822 Max: 64 00:07:52.822 Min: 64 00:07:52.822 Completion Queue Entry Size 00:07:52.822 Max: 16 00:07:52.822 Min: 16 00:07:52.822 Number of Namespaces: 256 00:07:52.822 Compare Command: Supported 00:07:52.822 Write Uncorrectable Command: Not Supported 00:07:52.822 Dataset Management Command: Supported 00:07:52.822 Write Zeroes Command: Supported 00:07:52.822 Set Features Save Field: Supported 00:07:52.822 Reservations: Not Supported 00:07:52.822 Timestamp: Supported 00:07:52.822 Copy: Supported 00:07:52.822 Volatile Write Cache: Present 00:07:52.822 Atomic Write Unit (Normal): 1 00:07:52.822 Atomic Write Unit (PFail): 1 00:07:52.822 Atomic Compare & Write Unit: 1 00:07:52.822 Fused Compare & Write: Not Supported 00:07:52.822 Scatter-Gather List 00:07:52.822 SGL Command Set: Supported 00:07:52.822 SGL Keyed: Not Supported 00:07:52.822 SGL Bit Bucket Descriptor: Not Supported 00:07:52.822 SGL Metadata Pointer: Not Supported 00:07:52.822 Oversized SGL: Not Supported 00:07:52.822 SGL Metadata Address: Not Supported 00:07:52.822 SGL Offset: Not Supported 00:07:52.822 Transport SGL Data Block: Not Supported 00:07:52.822 Replay Protected Memory Block: Not Supported 00:07:52.822 00:07:52.822 Firmware Slot Information 00:07:52.822 ========================= 00:07:52.822 Active slot: 1 00:07:52.822 Slot 1 Firmware Revision: 1.0 00:07:52.822 00:07:52.822 00:07:52.822 Commands Supported and Effects 00:07:52.822 ============================== 00:07:52.822 Admin Commands 00:07:52.822 -------------- 00:07:52.822 Delete I/O Submission Queue (00h): Supported 00:07:52.822 Create I/O Submission Queue (01h): Supported 00:07:52.822 Get Log Page (02h): Supported 00:07:52.822 Delete I/O Completion Queue (04h): Supported 00:07:52.822 Create I/O Completion Queue (05h): Supported 00:07:52.822 Identify (06h): Supported 00:07:52.822 Abort (08h): Supported 00:07:52.822 Set Features (09h): Supported 00:07:52.822 Get Features (0Ah): Supported 00:07:52.822 Asynchronous Event Request (0Ch): Supported 00:07:52.822 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.822 Directive Send (19h): Supported 00:07:52.822 Directive Receive (1Ah): Supported 00:07:52.822 Virtualization Management (1Ch): Supported 00:07:52.822 Doorbell Buffer Config (7Ch): Supported 00:07:52.822 Format NVM (80h): Supported LBA-Change 00:07:52.822 I/O Commands 00:07:52.822 ------------ 00:07:52.822 Flush (00h): Supported LBA-Change 00:07:52.822 Write (01h): Supported LBA-Change 00:07:52.822 Read (02h): Supported 00:07:52.822 Compare (05h): Supported 00:07:52.822 Write Zeroes (08h): Supported LBA-Change 00:07:52.822 Dataset Management (09h): Supported LBA-Change 00:07:52.822 Unknown (0Ch): Supported 00:07:52.822 Unknown (12h): Supported 00:07:52.822 Copy (19h): Supported LBA-Change 00:07:52.822 Unknown (1Dh): Supported LBA-Change 00:07:52.822 00:07:52.822 Error Log 00:07:52.822 ========= 00:07:52.822 00:07:52.822 Arbitration 00:07:52.822 ===========
00:07:52.822 Arbitration Burst: no limit 00:07:52.822 00:07:52.822 Power Management 00:07:52.822 ================ 00:07:52.822 Number of Power States: 1 00:07:52.822 Current Power State: Power State #0 00:07:52.822 Power State #0: 00:07:52.822 Max Power: 25.00 W 00:07:52.822 Non-Operational State: Operational 00:07:52.822 Entry Latency: 16 microseconds 00:07:52.822 Exit Latency: 4 microseconds 00:07:52.822 Relative Read Throughput: 0 00:07:52.822 Relative Read Latency: 0 00:07:52.822 Relative Write Throughput: 0 00:07:52.822 Relative Write Latency: 0 00:07:52.822 Idle Power: Not Reported 00:07:52.822 Active Power: Not Reported 00:07:52.822 Non-Operational Permissive Mode: Not Supported 00:07:52.822 00:07:52.822 Health Information 00:07:52.822 ================== 00:07:52.822 Critical Warnings: 00:07:52.822 Available Spare Space: OK 00:07:52.822 Temperature: OK 00:07:52.822 Device Reliability: OK 00:07:52.822 Read Only: No 00:07:52.822 Volatile Memory Backup: OK 00:07:52.822 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.822 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.822 Available Spare: 0% 00:07:52.822 Available Spare Threshold: 0% 00:07:52.822 Life Percentage Used: 0% 00:07:52.822 Data Units Read: 857 00:07:52.822 Data Units Written: 786 00:07:52.822 Host Read Commands: 40915 00:07:52.822 Host Write Commands: 40338 00:07:52.822 Controller Busy Time: 0 minutes 00:07:52.822 Power Cycles: 0 00:07:52.822 Power On Hours: 0 hours 00:07:52.822 Unsafe Shutdowns: 0 00:07:52.822 Unrecoverable Media Errors: 0 00:07:52.822 Lifetime Error Log Entries: 0 00:07:52.822 Warning Temperature Time: 0 minutes 00:07:52.822 Critical Temperature Time: 0 minutes 00:07:52.822 00:07:52.822 Number of Queues 00:07:52.822 ================ 00:07:52.822 Number of I/O Submission Queues: 64 00:07:52.822 Number of I/O Completion Queues: 64 00:07:52.822 00:07:52.822 ZNS Specific Controller Data 00:07:52.822 ============================ 00:07:52.822 Zone Append Size Limit: 0 00:07:52.822 00:07:52.822 00:07:52.822 Active Namespaces 00:07:52.822 ================= 00:07:52.822 Namespace ID:1 00:07:52.822 Error Recovery Timeout: Unlimited 00:07:52.822 Command Set Identifier: NVM (00h) 00:07:52.822 Deallocate: Supported 00:07:52.822 Deallocated/Unwritten Error: Supported 00:07:52.822 Deallocated Read Value: All 0x00 00:07:52.822 Deallocate in Write Zeroes: Not Supported 00:07:52.823 Deallocated Guard Field: 0xFFFF 00:07:52.823 Flush: Supported 00:07:52.823 Reservation: Not Supported 00:07:52.823 Namespace Sharing Capabilities: Multiple Controllers 00:07:52.823 Size (in LBAs): 262144 (1GiB) 00:07:52.823 Capacity (in LBAs): 262144 (1GiB) 00:07:52.823 Utilization (in LBAs): 262144 (1GiB) 00:07:52.823 Thin Provisioning: Not Supported 00:07:52.823 Per-NS Atomic Units: No 00:07:52.823 Maximum Single Source Range Length: 128 00:07:52.823 Maximum Copy Length: 128 00:07:52.823 Maximum Source Range Count: 128 00:07:52.823 NGUID/EUI64 Never Reused: No 00:07:52.823 Namespace Write Protected: No 00:07:52.823 Endurance group ID: 1 00:07:52.823 Number of LBA Formats: 8 00:07:52.823 Current LBA Format: LBA Format #04 00:07:52.823 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.823 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.823 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.823 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.823 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.823 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.823 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:52.823 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.823 00:07:52.823 Get Feature FDP: 00:07:52.823 ================ 00:07:52.823 Enabled: Yes 00:07:52.823 FDP configuration index: 0 00:07:52.823 00:07:52.823 FDP configurations log page 00:07:52.823 =========================== 00:07:52.823 Number of FDP configurations: 1 00:07:52.823 Version: 0 00:07:52.823 Size: 112 00:07:52.823 FDP Configuration Descriptor: 0 00:07:52.823 Descriptor Size: 96 00:07:52.823 Reclaim Group Identifier format: 2 00:07:52.823 FDP Volatile Write Cache: Not Present 00:07:52.823 FDP Configuration: Valid 00:07:52.823 Vendor Specific Size: 0 00:07:52.823 Number of Reclaim Groups: 2 00:07:52.823 Number of Reclaim Unit Handles: 8 00:07:52.823 Max Placement Identifiers: 128 00:07:52.823 Number of Namespaces Supported: 256 00:07:52.823 Reclaim Unit Nominal Size: 6000000 bytes 00:07:52.823 Estimated Reclaim Unit Time Limit: Not Reported 00:07:52.823 RUH Desc #000: RUH Type: Initially Isolated 00:07:52.823 RUH Desc #001: RUH Type: Initially Isolated 00:07:52.823 RUH Desc #002: RUH Type: Initially Isolated 00:07:52.823 RUH Desc #003: RUH Type: Initially Isolated 00:07:52.823 RUH Desc #004: RUH Type: Initially Isolated 00:07:52.823 RUH Desc #005: RUH Type: Initially Isolated 00:07:52.823 RUH Desc #006: RUH Type: Initially Isolated 00:07:52.823 RUH Desc #007: RUH Type: Initially Isolated 00:07:52.823 00:07:52.823 FDP reclaim unit handle usage log page 00:07:52.823 ====================================== 00:07:52.823 Number of Reclaim Unit Handles: 8 00:07:52.823 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:52.823 RUH Usage Desc #001: RUH Attributes: Unused 00:07:52.823 RUH Usage Desc #002: RUH Attributes: Unused 00:07:52.823 RUH Usage Desc #003: RUH Attributes: Unused 00:07:52.823 RUH Usage Desc #004: RUH Attributes: Unused 00:07:52.823 RUH Usage Desc #005: RUH Attributes: Unused 00:07:52.823 RUH Usage Desc #006: RUH Attributes: Unused 00:07:52.823 RUH Usage Desc #007: RUH Attributes: Unused 00:07:52.823 00:07:52.823 FDP statistics log page 00:07:52.823 ======================= 00:07:52.823 Host bytes with metadata written: 505389056 00:07:52.823 Media bytes with metadata written: 505446400 00:07:52.823 Media bytes erased: 0 00:07:52.823 00:07:52.823 FDP events log page 00:07:52.823 =================== 00:07:52.823 Number of FDP events: 0 00:07:52.823 00:07:52.823 NVM Specific Namespace Data 00:07:52.823 =========================== 00:07:52.823 Logical Block Storage Tag Mask: 0 00:07:52.823 Protection Information Capabilities: 00:07:52.823 16b Guard Protection Information Storage Tag Support: No 00:07:52.823 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.823 Storage Tag Check Read Support: No 00:07:52.823 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.823 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.823 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.823 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.823 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.823 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.823 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.823 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.823 00:07:52.823 real 0m0.998s 00:07:52.823 user 0m0.365s 00:07:52.823 sys 0m0.428s 00:07:52.823 18:02:27 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.823 18:02:27 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:52.823 ************************************ 00:07:52.823 END TEST nvme_identify 00:07:52.823 ************************************ 00:07:52.823 18:02:27 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:52.823 18:02:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.823 18:02:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.823 18:02:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.823 ************************************ 00:07:52.823 START TEST nvme_perf 00:07:52.823 ************************************ 00:07:52.823 18:02:27 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:52.823 18:02:27 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:54.209 Initializing NVMe Controllers 00:07:54.209 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.209 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.209 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.209 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.209 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:54.209 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:54.209 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:54.209 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:54.209 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:54.209 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:54.209 Initialization complete. Launching workers. 
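Note on the results that follow: the spdk_nvme_perf run above issues 12288-byte reads at queue depth 128 for one second against every attached namespace, then prints a per-device latency summary table and cumulative latency histograms. The MiB/s column in the table is derived directly from the IOPS column as IOPS x I/O size / 2^20; for the first device, 11509.65 x 12288 / 1048576 = 134.88 MiB/s, matching the printed value. A minimal sketch of that cross-check over a saved copy of this console output (perf.log is a hypothetical capture-file name, not produced by the test itself):

# Re-derive the MiB/s column from the IOPS column of each
# "... NSID n from core 0:" summary line below. io_size matches the
# "-o 12288" argument passed to spdk_nvme_perf; IOPS is counted from the
# end of the line so a timestamp prefix does not shift the field index.
grep 'from core 0:' perf.log | awk -v io_size=12288 '{
  iops = $(NF-4)                      # IOPS column (5th field from the end)
  printf "%.2f IOPS -> %.2f MiB/s\n", iops, iops * io_size / 1048576
}'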
00:07:54.209 ======================================================== 00:07:54.209 Latency(us) 00:07:54.209 Device Information : IOPS MiB/s Average min max 00:07:54.209 PCIE (0000:00:10.0) NSID 1 from core 0: 11509.65 134.88 11127.30 6682.53 24155.11 00:07:54.209 PCIE (0000:00:11.0) NSID 1 from core 0: 11509.65 134.88 11121.41 6261.63 23768.07 00:07:54.209 PCIE (0000:00:13.0) NSID 1 from core 0: 11509.65 134.88 11112.60 5368.61 23788.11 00:07:54.209 PCIE (0000:00:12.0) NSID 1 from core 0: 11509.65 134.88 11103.32 4903.06 23441.76 00:07:54.209 PCIE (0000:00:12.0) NSID 2 from core 0: 11509.65 134.88 11094.17 4521.68 23038.23 00:07:54.209 PCIE (0000:00:12.0) NSID 3 from core 0: 11509.65 134.88 11085.40 4029.16 22652.27 00:07:54.209 ======================================================== 00:07:54.209 Total : 69057.92 809.27 11107.37 4029.16 24155.11 00:07:54.209 00:07:54.209 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:54.209 ================================================================================= 00:07:54.209 1.00000% : 8217.206us 00:07:54.209 10.00000% : 9225.452us 00:07:54.209 25.00000% : 9679.163us 00:07:54.209 50.00000% : 10233.698us 00:07:54.209 75.00000% : 11443.594us 00:07:54.209 90.00000% : 15325.342us 00:07:54.209 95.00000% : 16736.886us 00:07:54.209 98.00000% : 17644.308us 00:07:54.209 99.00000% : 19559.975us 00:07:54.209 99.50000% : 22887.188us 00:07:54.209 99.90000% : 23895.434us 00:07:54.209 99.99000% : 24197.908us 00:07:54.209 99.99900% : 24197.908us 00:07:54.209 99.99990% : 24197.908us 00:07:54.209 99.99999% : 24197.908us 00:07:54.209 00:07:54.209 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:54.209 ================================================================================= 00:07:54.209 1.00000% : 8217.206us 00:07:54.209 10.00000% : 9275.865us 00:07:54.209 25.00000% : 9729.575us 00:07:54.209 50.00000% : 10233.698us 00:07:54.209 75.00000% : 11241.945us 00:07:54.209 90.00000% : 15224.517us 00:07:54.209 95.00000% : 16736.886us 00:07:54.209 98.00000% : 17644.308us 00:07:54.209 99.00000% : 20064.098us 00:07:54.209 99.50000% : 22685.538us 00:07:54.209 99.90000% : 23592.960us 00:07:54.209 99.99000% : 23794.609us 00:07:54.209 99.99900% : 23794.609us 00:07:54.209 99.99990% : 23794.609us 00:07:54.209 99.99999% : 23794.609us 00:07:54.209 00:07:54.209 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:54.209 ================================================================================= 00:07:54.209 1.00000% : 8318.031us 00:07:54.209 10.00000% : 9275.865us 00:07:54.209 25.00000% : 9679.163us 00:07:54.209 50.00000% : 10183.286us 00:07:54.209 75.00000% : 11292.357us 00:07:54.209 90.00000% : 15426.166us 00:07:54.209 95.00000% : 16535.237us 00:07:54.209 98.00000% : 17845.957us 00:07:54.209 99.00000% : 19660.800us 00:07:54.209 99.50000% : 22685.538us 00:07:54.209 99.90000% : 23592.960us 00:07:54.209 99.99000% : 23794.609us 00:07:54.209 99.99900% : 23794.609us 00:07:54.209 99.99990% : 23794.609us 00:07:54.209 99.99999% : 23794.609us 00:07:54.209 00:07:54.209 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:54.209 ================================================================================= 00:07:54.209 1.00000% : 8318.031us 00:07:54.209 10.00000% : 9275.865us 00:07:54.209 25.00000% : 9679.163us 00:07:54.209 50.00000% : 10233.698us 00:07:54.209 75.00000% : 11191.532us 00:07:54.209 90.00000% : 15426.166us 00:07:54.209 95.00000% : 16434.412us 00:07:54.209 98.00000% : 17946.782us 
00:07:54.209 99.00000% : 19358.326us 00:07:54.209 99.50000% : 22282.240us 00:07:54.209 99.90000% : 23290.486us 00:07:54.209 99.99000% : 23492.135us 00:07:54.209 99.99900% : 23492.135us 00:07:54.209 99.99990% : 23492.135us 00:07:54.209 99.99999% : 23492.135us 00:07:54.209 00:07:54.209 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:54.209 ================================================================================= 00:07:54.209 1.00000% : 8267.618us 00:07:54.209 10.00000% : 9275.865us 00:07:54.209 25.00000% : 9679.163us 00:07:54.209 50.00000% : 10233.698us 00:07:54.209 75.00000% : 11241.945us 00:07:54.209 90.00000% : 15426.166us 00:07:54.209 95.00000% : 16636.062us 00:07:54.209 98.00000% : 17644.308us 00:07:54.209 99.00000% : 19257.502us 00:07:54.209 99.50000% : 21979.766us 00:07:54.209 99.90000% : 22786.363us 00:07:54.209 99.99000% : 22988.012us 00:07:54.209 99.99900% : 23088.837us 00:07:54.209 99.99990% : 23088.837us 00:07:54.209 99.99999% : 23088.837us 00:07:54.209 00:07:54.209 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:54.209 ================================================================================= 00:07:54.209 1.00000% : 7965.145us 00:07:54.209 10.00000% : 9275.865us 00:07:54.209 25.00000% : 9679.163us 00:07:54.209 50.00000% : 10233.698us 00:07:54.209 75.00000% : 11342.769us 00:07:54.209 90.00000% : 15426.166us 00:07:54.209 95.00000% : 16736.886us 00:07:54.209 98.00000% : 17644.308us 00:07:54.209 99.00000% : 19459.151us 00:07:54.209 99.50000% : 21576.468us 00:07:54.209 99.90000% : 22483.889us 00:07:54.209 99.99000% : 22685.538us 00:07:54.209 99.99900% : 22685.538us 00:07:54.209 99.99990% : 22685.538us 00:07:54.209 99.99999% : 22685.538us 00:07:54.209 00:07:54.209 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:54.209 ============================================================================== 00:07:54.209 Range in us Cumulative IO count 00:07:54.209 6654.425 - 6704.837: 0.0087% ( 1) 00:07:54.209 6704.837 - 6755.249: 0.0608% ( 6) 00:07:54.209 6755.249 - 6805.662: 0.0781% ( 2) 00:07:54.209 6805.662 - 6856.074: 0.1128% ( 4) 00:07:54.209 6856.074 - 6906.486: 0.1389% ( 3) 00:07:54.209 6906.486 - 6956.898: 0.1736% ( 4) 00:07:54.209 6956.898 - 7007.311: 0.2083% ( 4) 00:07:54.209 7007.311 - 7057.723: 0.2344% ( 3) 00:07:54.209 7057.723 - 7108.135: 0.2691% ( 4) 00:07:54.209 7108.135 - 7158.548: 0.3038% ( 4) 00:07:54.209 7158.548 - 7208.960: 0.3299% ( 3) 00:07:54.209 7208.960 - 7259.372: 0.3559% ( 3) 00:07:54.209 7259.372 - 7309.785: 0.3906% ( 4) 00:07:54.209 7309.785 - 7360.197: 0.4253% ( 4) 00:07:54.209 7360.197 - 7410.609: 0.4601% ( 4) 00:07:54.209 7410.609 - 7461.022: 0.4774% ( 2) 00:07:54.209 7461.022 - 7511.434: 0.5122% ( 4) 00:07:54.209 7511.434 - 7561.846: 0.5469% ( 4) 00:07:54.209 7561.846 - 7612.258: 0.5556% ( 1) 00:07:54.209 7763.495 - 7813.908: 0.5816% ( 3) 00:07:54.209 7813.908 - 7864.320: 0.6076% ( 3) 00:07:54.209 7864.320 - 7914.732: 0.6337% ( 3) 00:07:54.209 7914.732 - 7965.145: 0.6771% ( 5) 00:07:54.209 7965.145 - 8015.557: 0.7031% ( 3) 00:07:54.209 8015.557 - 8065.969: 0.7639% ( 7) 00:07:54.209 8065.969 - 8116.382: 0.8247% ( 7) 00:07:54.209 8116.382 - 8166.794: 0.9028% ( 9) 00:07:54.209 8166.794 - 8217.206: 1.0243% ( 14) 00:07:54.209 8217.206 - 8267.618: 1.1198% ( 11) 00:07:54.209 8267.618 - 8318.031: 1.2066% ( 10) 00:07:54.209 8318.031 - 8368.443: 1.3368% ( 15) 00:07:54.209 8368.443 - 8418.855: 1.4583% ( 14) 00:07:54.209 8418.855 - 8469.268: 1.5712% ( 13) 00:07:54.209 8469.268 - 8519.680: 
1.7708% ( 23) 00:07:54.209 8519.680 - 8570.092: 2.0226% ( 29) 00:07:54.210 8570.092 - 8620.505: 2.2917% ( 31) 00:07:54.210 8620.505 - 8670.917: 2.5434% ( 29) 00:07:54.210 8670.917 - 8721.329: 2.9340% ( 45) 00:07:54.210 8721.329 - 8771.742: 3.2639% ( 38) 00:07:54.210 8771.742 - 8822.154: 3.6545% ( 45) 00:07:54.210 8822.154 - 8872.566: 4.1059% ( 52) 00:07:54.210 8872.566 - 8922.978: 4.6007% ( 57) 00:07:54.210 8922.978 - 8973.391: 5.2344% ( 73) 00:07:54.210 8973.391 - 9023.803: 6.0677% ( 96) 00:07:54.210 9023.803 - 9074.215: 6.9705% ( 104) 00:07:54.210 9074.215 - 9124.628: 7.8125% ( 97) 00:07:54.210 9124.628 - 9175.040: 8.8802% ( 123) 00:07:54.210 9175.040 - 9225.452: 10.1910% ( 151) 00:07:54.210 9225.452 - 9275.865: 11.5885% ( 161) 00:07:54.210 9275.865 - 9326.277: 13.0469% ( 168) 00:07:54.210 9326.277 - 9376.689: 14.6007% ( 179) 00:07:54.210 9376.689 - 9427.102: 16.2326% ( 188) 00:07:54.210 9427.102 - 9477.514: 17.9774% ( 201) 00:07:54.210 9477.514 - 9527.926: 19.8872% ( 220) 00:07:54.210 9527.926 - 9578.338: 21.9618% ( 239) 00:07:54.210 9578.338 - 9628.751: 23.9323% ( 227) 00:07:54.210 9628.751 - 9679.163: 26.1111% ( 251) 00:07:54.210 9679.163 - 9729.575: 28.1684% ( 237) 00:07:54.210 9729.575 - 9779.988: 30.3906% ( 256) 00:07:54.210 9779.988 - 9830.400: 32.5347% ( 247) 00:07:54.210 9830.400 - 9880.812: 34.7222% ( 252) 00:07:54.210 9880.812 - 9931.225: 37.1007% ( 274) 00:07:54.210 9931.225 - 9981.637: 39.5312% ( 280) 00:07:54.210 9981.637 - 10032.049: 41.9531% ( 279) 00:07:54.210 10032.049 - 10082.462: 44.4010% ( 282) 00:07:54.210 10082.462 - 10132.874: 46.7361% ( 269) 00:07:54.210 10132.874 - 10183.286: 48.8194% ( 240) 00:07:54.210 10183.286 - 10233.698: 51.0069% ( 252) 00:07:54.210 10233.698 - 10284.111: 53.0642% ( 237) 00:07:54.210 10284.111 - 10334.523: 55.1736% ( 243) 00:07:54.210 10334.523 - 10384.935: 56.9010% ( 199) 00:07:54.210 10384.935 - 10435.348: 58.6806% ( 205) 00:07:54.210 10435.348 - 10485.760: 60.2691% ( 183) 00:07:54.210 10485.760 - 10536.172: 61.6319% ( 157) 00:07:54.210 10536.172 - 10586.585: 63.0642% ( 165) 00:07:54.210 10586.585 - 10636.997: 64.4097% ( 155) 00:07:54.210 10636.997 - 10687.409: 65.5816% ( 135) 00:07:54.210 10687.409 - 10737.822: 66.7101% ( 130) 00:07:54.210 10737.822 - 10788.234: 67.6910% ( 113) 00:07:54.210 10788.234 - 10838.646: 68.7153% ( 118) 00:07:54.210 10838.646 - 10889.058: 69.6962% ( 113) 00:07:54.210 10889.058 - 10939.471: 70.5816% ( 102) 00:07:54.210 10939.471 - 10989.883: 71.3194% ( 85) 00:07:54.210 10989.883 - 11040.295: 72.0747% ( 87) 00:07:54.210 11040.295 - 11090.708: 72.5521% ( 55) 00:07:54.210 11090.708 - 11141.120: 72.9861% ( 50) 00:07:54.210 11141.120 - 11191.532: 73.4028% ( 48) 00:07:54.210 11191.532 - 11241.945: 73.7413% ( 39) 00:07:54.210 11241.945 - 11292.357: 74.1840% ( 51) 00:07:54.210 11292.357 - 11342.769: 74.5486% ( 42) 00:07:54.210 11342.769 - 11393.182: 74.9306% ( 44) 00:07:54.210 11393.182 - 11443.594: 75.2865% ( 41) 00:07:54.210 11443.594 - 11494.006: 75.6076% ( 37) 00:07:54.210 11494.006 - 11544.418: 75.9809% ( 43) 00:07:54.210 11544.418 - 11594.831: 76.3889% ( 47) 00:07:54.210 11594.831 - 11645.243: 76.7188% ( 38) 00:07:54.210 11645.243 - 11695.655: 77.0747% ( 41) 00:07:54.210 11695.655 - 11746.068: 77.3351% ( 30) 00:07:54.210 11746.068 - 11796.480: 77.6215% ( 33) 00:07:54.210 11796.480 - 11846.892: 77.9080% ( 33) 00:07:54.210 11846.892 - 11897.305: 78.1771% ( 31) 00:07:54.210 11897.305 - 11947.717: 78.4288% ( 29) 00:07:54.210 11947.717 - 11998.129: 78.6545% ( 26) 00:07:54.210 11998.129 - 12048.542: 78.9497% ( 34) 
00:07:54.210 12048.542 - 12098.954: 79.1059% ( 18) 00:07:54.210 12098.954 - 12149.366: 79.3229% ( 25) 00:07:54.210 12149.366 - 12199.778: 79.5660% ( 28) 00:07:54.210 12199.778 - 12250.191: 79.7743% ( 24) 00:07:54.210 12250.191 - 12300.603: 79.9306% ( 18) 00:07:54.210 12300.603 - 12351.015: 80.1215% ( 22) 00:07:54.210 12351.015 - 12401.428: 80.3212% ( 23) 00:07:54.210 12401.428 - 12451.840: 80.5035% ( 21) 00:07:54.210 12451.840 - 12502.252: 80.6684% ( 19) 00:07:54.210 12502.252 - 12552.665: 80.8767% ( 24) 00:07:54.210 12552.665 - 12603.077: 81.0156% ( 16) 00:07:54.210 12603.077 - 12653.489: 81.1719% ( 18) 00:07:54.210 12653.489 - 12703.902: 81.3281% ( 18) 00:07:54.210 12703.902 - 12754.314: 81.5365% ( 24) 00:07:54.210 12754.314 - 12804.726: 81.6927% ( 18) 00:07:54.210 12804.726 - 12855.138: 81.8576% ( 19) 00:07:54.210 12855.138 - 12905.551: 82.0139% ( 18) 00:07:54.210 12905.551 - 13006.375: 82.3611% ( 40) 00:07:54.210 13006.375 - 13107.200: 82.6128% ( 29) 00:07:54.210 13107.200 - 13208.025: 82.9167% ( 35) 00:07:54.210 13208.025 - 13308.849: 83.1944% ( 32) 00:07:54.210 13308.849 - 13409.674: 83.4375% ( 28) 00:07:54.210 13409.674 - 13510.498: 83.8108% ( 43) 00:07:54.210 13510.498 - 13611.323: 84.1840% ( 43) 00:07:54.210 13611.323 - 13712.148: 84.4358% ( 29) 00:07:54.210 13712.148 - 13812.972: 84.8090% ( 43) 00:07:54.210 13812.972 - 13913.797: 85.1910% ( 44) 00:07:54.210 13913.797 - 14014.622: 85.5729% ( 44) 00:07:54.210 14014.622 - 14115.446: 85.9809% ( 47) 00:07:54.210 14115.446 - 14216.271: 86.2847% ( 35) 00:07:54.210 14216.271 - 14317.095: 86.7361% ( 52) 00:07:54.210 14317.095 - 14417.920: 87.1181% ( 44) 00:07:54.210 14417.920 - 14518.745: 87.4740% ( 41) 00:07:54.210 14518.745 - 14619.569: 87.8472% ( 43) 00:07:54.210 14619.569 - 14720.394: 88.2378% ( 45) 00:07:54.210 14720.394 - 14821.218: 88.5764% ( 39) 00:07:54.210 14821.218 - 14922.043: 88.9497% ( 43) 00:07:54.210 14922.043 - 15022.868: 89.2795% ( 38) 00:07:54.210 15022.868 - 15123.692: 89.5833% ( 35) 00:07:54.210 15123.692 - 15224.517: 89.8872% ( 35) 00:07:54.210 15224.517 - 15325.342: 90.2431% ( 41) 00:07:54.210 15325.342 - 15426.166: 90.4774% ( 27) 00:07:54.210 15426.166 - 15526.991: 90.6597% ( 21) 00:07:54.210 15526.991 - 15627.815: 90.9983% ( 39) 00:07:54.210 15627.815 - 15728.640: 91.2413% ( 28) 00:07:54.210 15728.640 - 15829.465: 91.6059% ( 42) 00:07:54.210 15829.465 - 15930.289: 91.8576% ( 29) 00:07:54.210 15930.289 - 16031.114: 92.2483% ( 45) 00:07:54.210 16031.114 - 16131.938: 92.5868% ( 39) 00:07:54.210 16131.938 - 16232.763: 92.9080% ( 37) 00:07:54.210 16232.763 - 16333.588: 93.2812% ( 43) 00:07:54.210 16333.588 - 16434.412: 93.6545% ( 43) 00:07:54.210 16434.412 - 16535.237: 94.0799% ( 49) 00:07:54.210 16535.237 - 16636.062: 94.5139% ( 50) 00:07:54.210 16636.062 - 16736.886: 95.0521% ( 62) 00:07:54.210 16736.886 - 16837.711: 95.5035% ( 52) 00:07:54.210 16837.711 - 16938.535: 95.9635% ( 53) 00:07:54.210 16938.535 - 17039.360: 96.3976% ( 50) 00:07:54.210 17039.360 - 17140.185: 96.8142% ( 48) 00:07:54.210 17140.185 - 17241.009: 97.1094% ( 34) 00:07:54.210 17241.009 - 17341.834: 97.3872% ( 32) 00:07:54.210 17341.834 - 17442.658: 97.6562% ( 31) 00:07:54.210 17442.658 - 17543.483: 97.9427% ( 33) 00:07:54.210 17543.483 - 17644.308: 98.1510% ( 24) 00:07:54.210 17644.308 - 17745.132: 98.2639% ( 13) 00:07:54.210 17745.132 - 17845.957: 98.3333% ( 8) 00:07:54.210 17845.957 - 17946.782: 98.3854% ( 6) 00:07:54.210 17946.782 - 18047.606: 98.4201% ( 4) 00:07:54.210 18047.606 - 18148.431: 98.4549% ( 4) 00:07:54.210 18148.431 - 18249.255: 
98.4722% ( 2) 00:07:54.210 18249.255 - 18350.080: 98.4896% ( 2) 00:07:54.210 18350.080 - 18450.905: 98.5330% ( 5) 00:07:54.210 18450.905 - 18551.729: 98.5503% ( 2) 00:07:54.210 18551.729 - 18652.554: 98.5764% ( 3) 00:07:54.210 18652.554 - 18753.378: 98.6024% ( 3) 00:07:54.210 18753.378 - 18854.203: 98.6285% ( 3) 00:07:54.210 18854.203 - 18955.028: 98.6545% ( 3) 00:07:54.210 18955.028 - 19055.852: 98.6806% ( 3) 00:07:54.210 19055.852 - 19156.677: 98.7240% ( 5) 00:07:54.210 19156.677 - 19257.502: 98.8021% ( 9) 00:07:54.210 19257.502 - 19358.326: 98.8802% ( 9) 00:07:54.210 19358.326 - 19459.151: 98.9497% ( 8) 00:07:54.210 19459.151 - 19559.975: 99.0278% ( 9) 00:07:54.210 19559.975 - 19660.800: 99.1146% ( 10) 00:07:54.210 19660.800 - 19761.625: 99.1840% ( 8) 00:07:54.210 19761.625 - 19862.449: 99.2708% ( 10) 00:07:54.210 19862.449 - 19963.274: 99.3229% ( 6) 00:07:54.210 19963.274 - 20064.098: 99.3663% ( 5) 00:07:54.210 20064.098 - 20164.923: 99.4097% ( 5) 00:07:54.210 20164.923 - 20265.748: 99.4444% ( 4) 00:07:54.210 22685.538 - 22786.363: 99.4878% ( 5) 00:07:54.210 22786.363 - 22887.188: 99.5139% ( 3) 00:07:54.210 22887.188 - 22988.012: 99.5573% ( 5) 00:07:54.210 22988.012 - 23088.837: 99.5920% ( 4) 00:07:54.210 23088.837 - 23189.662: 99.6354% ( 5) 00:07:54.210 23189.662 - 23290.486: 99.6615% ( 3) 00:07:54.210 23290.486 - 23391.311: 99.7049% ( 5) 00:07:54.210 23391.311 - 23492.135: 99.7483% ( 5) 00:07:54.210 23492.135 - 23592.960: 99.7830% ( 4) 00:07:54.210 23592.960 - 23693.785: 99.8264% ( 5) 00:07:54.210 23693.785 - 23794.609: 99.8698% ( 5) 00:07:54.210 23794.609 - 23895.434: 99.9045% ( 4) 00:07:54.210 23895.434 - 23996.258: 99.9479% ( 5) 00:07:54.210 23996.258 - 24097.083: 99.9826% ( 4) 00:07:54.210 24097.083 - 24197.908: 100.0000% ( 2) 00:07:54.210 00:07:54.210 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:54.210 ============================================================================== 00:07:54.210 Range in us Cumulative IO count 00:07:54.210 6251.126 - 6276.332: 0.0260% ( 3) 00:07:54.210 6276.332 - 6301.538: 0.0694% ( 5) 00:07:54.210 6301.538 - 6326.745: 0.1215% ( 6) 00:07:54.210 6326.745 - 6351.951: 0.1389% ( 2) 00:07:54.210 6351.951 - 6377.157: 0.1562% ( 2) 00:07:54.210 6377.157 - 6402.363: 0.1649% ( 1) 00:07:54.210 6402.363 - 6427.569: 0.1736% ( 1) 00:07:54.210 6427.569 - 6452.775: 0.1823% ( 1) 00:07:54.210 6452.775 - 6503.188: 0.2257% ( 5) 00:07:54.210 6503.188 - 6553.600: 0.2517% ( 3) 00:07:54.210 6553.600 - 6604.012: 0.2951% ( 5) 00:07:54.210 6604.012 - 6654.425: 0.3299% ( 4) 00:07:54.210 6654.425 - 6704.837: 0.3646% ( 4) 00:07:54.210 6704.837 - 6755.249: 0.4080% ( 5) 00:07:54.210 6755.249 - 6805.662: 0.4427% ( 4) 00:07:54.211 6805.662 - 6856.074: 0.4774% ( 4) 00:07:54.211 6856.074 - 6906.486: 0.5122% ( 4) 00:07:54.211 6906.486 - 6956.898: 0.5469% ( 4) 00:07:54.211 6956.898 - 7007.311: 0.5556% ( 1) 00:07:54.211 7713.083 - 7763.495: 0.5729% ( 2) 00:07:54.211 7763.495 - 7813.908: 0.6076% ( 4) 00:07:54.211 7813.908 - 7864.320: 0.6250% ( 2) 00:07:54.211 7864.320 - 7914.732: 0.6510% ( 3) 00:07:54.211 7914.732 - 7965.145: 0.6771% ( 3) 00:07:54.211 7965.145 - 8015.557: 0.7031% ( 3) 00:07:54.211 8015.557 - 8065.969: 0.7292% ( 3) 00:07:54.211 8065.969 - 8116.382: 0.7639% ( 4) 00:07:54.211 8116.382 - 8166.794: 0.8854% ( 14) 00:07:54.211 8166.794 - 8217.206: 1.0156% ( 15) 00:07:54.211 8217.206 - 8267.618: 1.1111% ( 11) 00:07:54.211 8267.618 - 8318.031: 1.2413% ( 15) 00:07:54.211 8318.031 - 8368.443: 1.3802% ( 16) 00:07:54.211 8368.443 - 8418.855: 1.5017% ( 14) 
00:07:54.211 8418.855 - 8469.268: 1.6580% ( 18) 00:07:54.211 8469.268 - 8519.680: 1.7969% ( 16) 00:07:54.211 8519.680 - 8570.092: 1.9358% ( 16) 00:07:54.211 8570.092 - 8620.505: 2.1701% ( 27) 00:07:54.211 8620.505 - 8670.917: 2.4219% ( 29) 00:07:54.211 8670.917 - 8721.329: 2.7778% ( 41) 00:07:54.211 8721.329 - 8771.742: 3.1337% ( 41) 00:07:54.211 8771.742 - 8822.154: 3.4462% ( 36) 00:07:54.211 8822.154 - 8872.566: 3.8281% ( 44) 00:07:54.211 8872.566 - 8922.978: 4.2969% ( 54) 00:07:54.211 8922.978 - 8973.391: 4.8524% ( 64) 00:07:54.211 8973.391 - 9023.803: 5.5556% ( 81) 00:07:54.211 9023.803 - 9074.215: 6.2847% ( 84) 00:07:54.211 9074.215 - 9124.628: 7.2309% ( 109) 00:07:54.211 9124.628 - 9175.040: 8.2292% ( 115) 00:07:54.211 9175.040 - 9225.452: 9.1580% ( 107) 00:07:54.211 9225.452 - 9275.865: 10.2431% ( 125) 00:07:54.211 9275.865 - 9326.277: 11.5104% ( 146) 00:07:54.211 9326.277 - 9376.689: 12.9861% ( 170) 00:07:54.211 9376.689 - 9427.102: 14.8264% ( 212) 00:07:54.211 9427.102 - 9477.514: 16.5451% ( 198) 00:07:54.211 9477.514 - 9527.926: 18.3247% ( 205) 00:07:54.211 9527.926 - 9578.338: 20.4340% ( 243) 00:07:54.211 9578.338 - 9628.751: 22.6476% ( 255) 00:07:54.211 9628.751 - 9679.163: 24.8264% ( 251) 00:07:54.211 9679.163 - 9729.575: 27.1528% ( 268) 00:07:54.211 9729.575 - 9779.988: 29.5486% ( 276) 00:07:54.211 9779.988 - 9830.400: 32.1007% ( 294) 00:07:54.211 9830.400 - 9880.812: 34.7049% ( 300) 00:07:54.211 9880.812 - 9931.225: 37.3438% ( 304) 00:07:54.211 9931.225 - 9981.637: 39.9740% ( 303) 00:07:54.211 9981.637 - 10032.049: 42.3872% ( 278) 00:07:54.211 10032.049 - 10082.462: 44.8177% ( 280) 00:07:54.211 10082.462 - 10132.874: 47.3524% ( 292) 00:07:54.211 10132.874 - 10183.286: 49.7396% ( 275) 00:07:54.211 10183.286 - 10233.698: 52.1528% ( 278) 00:07:54.211 10233.698 - 10284.111: 54.3750% ( 256) 00:07:54.211 10284.111 - 10334.523: 56.4670% ( 241) 00:07:54.211 10334.523 - 10384.935: 58.3767% ( 220) 00:07:54.211 10384.935 - 10435.348: 60.1302% ( 202) 00:07:54.211 10435.348 - 10485.760: 61.8750% ( 201) 00:07:54.211 10485.760 - 10536.172: 63.4896% ( 186) 00:07:54.211 10536.172 - 10586.585: 65.0174% ( 176) 00:07:54.211 10586.585 - 10636.997: 66.3542% ( 154) 00:07:54.211 10636.997 - 10687.409: 67.5521% ( 138) 00:07:54.211 10687.409 - 10737.822: 68.7153% ( 134) 00:07:54.211 10737.822 - 10788.234: 69.7569% ( 120) 00:07:54.211 10788.234 - 10838.646: 70.6337% ( 101) 00:07:54.211 10838.646 - 10889.058: 71.4757% ( 97) 00:07:54.211 10889.058 - 10939.471: 72.1615% ( 79) 00:07:54.211 10939.471 - 10989.883: 72.7604% ( 69) 00:07:54.211 10989.883 - 11040.295: 73.3854% ( 72) 00:07:54.211 11040.295 - 11090.708: 73.8628% ( 55) 00:07:54.211 11090.708 - 11141.120: 74.3316% ( 54) 00:07:54.211 11141.120 - 11191.532: 74.7222% ( 45) 00:07:54.211 11191.532 - 11241.945: 75.1128% ( 45) 00:07:54.211 11241.945 - 11292.357: 75.5122% ( 46) 00:07:54.211 11292.357 - 11342.769: 75.8507% ( 39) 00:07:54.211 11342.769 - 11393.182: 76.1285% ( 32) 00:07:54.211 11393.182 - 11443.594: 76.3368% ( 24) 00:07:54.211 11443.594 - 11494.006: 76.5625% ( 26) 00:07:54.211 11494.006 - 11544.418: 76.7361% ( 20) 00:07:54.211 11544.418 - 11594.831: 76.9010% ( 19) 00:07:54.211 11594.831 - 11645.243: 77.0486% ( 17) 00:07:54.211 11645.243 - 11695.655: 77.2309% ( 21) 00:07:54.211 11695.655 - 11746.068: 77.3264% ( 11) 00:07:54.211 11746.068 - 11796.480: 77.4219% ( 11) 00:07:54.211 11796.480 - 11846.892: 77.5347% ( 13) 00:07:54.211 11846.892 - 11897.305: 77.6302% ( 11) 00:07:54.211 11897.305 - 11947.717: 77.7344% ( 12) 00:07:54.211 11947.717 - 
11998.129: 77.8212% ( 10) 00:07:54.211 11998.129 - 12048.542: 77.9253% ( 12) 00:07:54.211 12048.542 - 12098.954: 78.0035% ( 9) 00:07:54.211 12098.954 - 12149.366: 78.1250% ( 14) 00:07:54.211 12149.366 - 12199.778: 78.2292% ( 12) 00:07:54.211 12199.778 - 12250.191: 78.3594% ( 15) 00:07:54.211 12250.191 - 12300.603: 78.4722% ( 13) 00:07:54.211 12300.603 - 12351.015: 78.5764% ( 12) 00:07:54.211 12351.015 - 12401.428: 78.7674% ( 22) 00:07:54.211 12401.428 - 12451.840: 78.9670% ( 23) 00:07:54.211 12451.840 - 12502.252: 79.1580% ( 22) 00:07:54.211 12502.252 - 12552.665: 79.3924% ( 27) 00:07:54.211 12552.665 - 12603.077: 79.6181% ( 26) 00:07:54.211 12603.077 - 12653.489: 79.8524% ( 27) 00:07:54.211 12653.489 - 12703.902: 80.0955% ( 28) 00:07:54.211 12703.902 - 12754.314: 80.3385% ( 28) 00:07:54.211 12754.314 - 12804.726: 80.6858% ( 40) 00:07:54.211 12804.726 - 12855.138: 80.9896% ( 35) 00:07:54.211 12855.138 - 12905.551: 81.2326% ( 28) 00:07:54.211 12905.551 - 13006.375: 81.7535% ( 60) 00:07:54.211 13006.375 - 13107.200: 82.2309% ( 55) 00:07:54.211 13107.200 - 13208.025: 82.6997% ( 54) 00:07:54.211 13208.025 - 13308.849: 83.1597% ( 53) 00:07:54.211 13308.849 - 13409.674: 83.6198% ( 53) 00:07:54.211 13409.674 - 13510.498: 83.9497% ( 38) 00:07:54.211 13510.498 - 13611.323: 84.2969% ( 40) 00:07:54.211 13611.323 - 13712.148: 84.6094% ( 36) 00:07:54.211 13712.148 - 13812.972: 84.9826% ( 43) 00:07:54.211 13812.972 - 13913.797: 85.3646% ( 44) 00:07:54.211 13913.797 - 14014.622: 85.7292% ( 42) 00:07:54.211 14014.622 - 14115.446: 86.1806% ( 52) 00:07:54.211 14115.446 - 14216.271: 86.6753% ( 57) 00:07:54.211 14216.271 - 14317.095: 87.1701% ( 57) 00:07:54.211 14317.095 - 14417.920: 87.5608% ( 45) 00:07:54.211 14417.920 - 14518.745: 87.9080% ( 40) 00:07:54.211 14518.745 - 14619.569: 88.2812% ( 43) 00:07:54.211 14619.569 - 14720.394: 88.6458% ( 42) 00:07:54.211 14720.394 - 14821.218: 88.9757% ( 38) 00:07:54.211 14821.218 - 14922.043: 89.2882% ( 36) 00:07:54.211 14922.043 - 15022.868: 89.5486% ( 30) 00:07:54.211 15022.868 - 15123.692: 89.8264% ( 32) 00:07:54.211 15123.692 - 15224.517: 90.0868% ( 30) 00:07:54.211 15224.517 - 15325.342: 90.3385% ( 29) 00:07:54.211 15325.342 - 15426.166: 90.5903% ( 29) 00:07:54.211 15426.166 - 15526.991: 90.8420% ( 29) 00:07:54.211 15526.991 - 15627.815: 91.1979% ( 41) 00:07:54.211 15627.815 - 15728.640: 91.5712% ( 43) 00:07:54.211 15728.640 - 15829.465: 91.9097% ( 39) 00:07:54.211 15829.465 - 15930.289: 92.2222% ( 36) 00:07:54.211 15930.289 - 16031.114: 92.5521% ( 38) 00:07:54.211 16031.114 - 16131.938: 92.9514% ( 46) 00:07:54.211 16131.938 - 16232.763: 93.3160% ( 42) 00:07:54.211 16232.763 - 16333.588: 93.6545% ( 39) 00:07:54.211 16333.588 - 16434.412: 94.0712% ( 48) 00:07:54.211 16434.412 - 16535.237: 94.4878% ( 48) 00:07:54.211 16535.237 - 16636.062: 94.8698% ( 44) 00:07:54.211 16636.062 - 16736.886: 95.2604% ( 45) 00:07:54.211 16736.886 - 16837.711: 95.6858% ( 49) 00:07:54.211 16837.711 - 16938.535: 96.1111% ( 49) 00:07:54.211 16938.535 - 17039.360: 96.5278% ( 48) 00:07:54.211 17039.360 - 17140.185: 96.8750% ( 40) 00:07:54.211 17140.185 - 17241.009: 97.2049% ( 38) 00:07:54.211 17241.009 - 17341.834: 97.5000% ( 34) 00:07:54.211 17341.834 - 17442.658: 97.7517% ( 29) 00:07:54.211 17442.658 - 17543.483: 97.9601% ( 24) 00:07:54.211 17543.483 - 17644.308: 98.0903% ( 15) 00:07:54.211 17644.308 - 17745.132: 98.1944% ( 12) 00:07:54.211 17745.132 - 17845.957: 98.2726% ( 9) 00:07:54.211 17845.957 - 17946.782: 98.3247% ( 6) 00:07:54.211 17946.782 - 18047.606: 98.3333% ( 1) 00:07:54.211 
18854.203 - 18955.028: 98.3507% ( 2) 00:07:54.211 18955.028 - 19055.852: 98.3767% ( 3) 00:07:54.211 19055.852 - 19156.677: 98.4115% ( 4) 00:07:54.211 19156.677 - 19257.502: 98.4375% ( 3) 00:07:54.211 19257.502 - 19358.326: 98.4635% ( 3) 00:07:54.211 19358.326 - 19459.151: 98.4983% ( 4) 00:07:54.211 19459.151 - 19559.975: 98.5764% ( 9) 00:07:54.211 19559.975 - 19660.800: 98.6545% ( 9) 00:07:54.211 19660.800 - 19761.625: 98.7587% ( 12) 00:07:54.211 19761.625 - 19862.449: 98.8455% ( 10) 00:07:54.211 19862.449 - 19963.274: 98.9410% ( 11) 00:07:54.211 19963.274 - 20064.098: 99.0278% ( 10) 00:07:54.211 20064.098 - 20164.923: 99.1233% ( 11) 00:07:54.211 20164.923 - 20265.748: 99.2101% ( 10) 00:07:54.211 20265.748 - 20366.572: 99.2969% ( 10) 00:07:54.211 20366.572 - 20467.397: 99.3490% ( 6) 00:07:54.211 20467.397 - 20568.222: 99.3750% ( 3) 00:07:54.211 20568.222 - 20669.046: 99.4097% ( 4) 00:07:54.211 20669.046 - 20769.871: 99.4358% ( 3) 00:07:54.211 20769.871 - 20870.695: 99.4444% ( 1) 00:07:54.211 22483.889 - 22584.714: 99.4792% ( 4) 00:07:54.211 22584.714 - 22685.538: 99.5226% ( 5) 00:07:54.211 22685.538 - 22786.363: 99.5747% ( 6) 00:07:54.211 22786.363 - 22887.188: 99.6181% ( 5) 00:07:54.211 22887.188 - 22988.012: 99.6615% ( 5) 00:07:54.211 22988.012 - 23088.837: 99.6962% ( 4) 00:07:54.211 23088.837 - 23189.662: 99.7396% ( 5) 00:07:54.211 23189.662 - 23290.486: 99.7743% ( 4) 00:07:54.211 23290.486 - 23391.311: 99.8177% ( 5) 00:07:54.211 23391.311 - 23492.135: 99.8698% ( 6) 00:07:54.211 23492.135 - 23592.960: 99.9132% ( 5) 00:07:54.211 23592.960 - 23693.785: 99.9653% ( 6) 00:07:54.211 23693.785 - 23794.609: 100.0000% ( 4) 00:07:54.211 00:07:54.211 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:54.211 ============================================================================== 00:07:54.212 Range in us Cumulative IO count 00:07:54.212 5343.705 - 5368.911: 0.0087% ( 1) 00:07:54.212 5368.911 - 5394.117: 0.0260% ( 2) 00:07:54.212 5394.117 - 5419.323: 0.0434% ( 2) 00:07:54.212 5419.323 - 5444.529: 0.0608% ( 2) 00:07:54.212 5444.529 - 5469.735: 0.0781% ( 2) 00:07:54.212 5469.735 - 5494.942: 0.0955% ( 2) 00:07:54.212 5494.942 - 5520.148: 0.1128% ( 2) 00:07:54.212 5520.148 - 5545.354: 0.1302% ( 2) 00:07:54.212 5545.354 - 5570.560: 0.1476% ( 2) 00:07:54.212 5570.560 - 5595.766: 0.1736% ( 3) 00:07:54.212 5595.766 - 5620.972: 0.1910% ( 2) 00:07:54.212 5620.972 - 5646.178: 0.2083% ( 2) 00:07:54.212 5646.178 - 5671.385: 0.2257% ( 2) 00:07:54.212 5671.385 - 5696.591: 0.2517% ( 3) 00:07:54.212 5696.591 - 5721.797: 0.2691% ( 2) 00:07:54.212 5721.797 - 5747.003: 0.2865% ( 2) 00:07:54.212 5747.003 - 5772.209: 0.3038% ( 2) 00:07:54.212 5772.209 - 5797.415: 0.3212% ( 2) 00:07:54.212 5797.415 - 5822.622: 0.3385% ( 2) 00:07:54.212 5822.622 - 5847.828: 0.3559% ( 2) 00:07:54.212 5847.828 - 5873.034: 0.3819% ( 3) 00:07:54.212 5873.034 - 5898.240: 0.3993% ( 2) 00:07:54.212 5898.240 - 5923.446: 0.4167% ( 2) 00:07:54.212 5923.446 - 5948.652: 0.4340% ( 2) 00:07:54.212 5948.652 - 5973.858: 0.4514% ( 2) 00:07:54.212 5973.858 - 5999.065: 0.4688% ( 2) 00:07:54.212 5999.065 - 6024.271: 0.4861% ( 2) 00:07:54.212 6024.271 - 6049.477: 0.5122% ( 3) 00:07:54.212 6049.477 - 6074.683: 0.5295% ( 2) 00:07:54.212 6074.683 - 6099.889: 0.5469% ( 2) 00:07:54.212 6099.889 - 6125.095: 0.5556% ( 1) 00:07:54.212 7965.145 - 8015.557: 0.5990% ( 5) 00:07:54.212 8015.557 - 8065.969: 0.6250% ( 3) 00:07:54.212 8065.969 - 8116.382: 0.6858% ( 7) 00:07:54.212 8116.382 - 8166.794: 0.7378% ( 6) 00:07:54.212 8166.794 - 8217.206: 
0.8333% ( 11) 00:07:54.212 8217.206 - 8267.618: 0.9462% ( 13) 00:07:54.212 8267.618 - 8318.031: 1.0677% ( 14) 00:07:54.212 8318.031 - 8368.443: 1.2066% ( 16) 00:07:54.212 8368.443 - 8418.855: 1.3802% ( 20) 00:07:54.212 8418.855 - 8469.268: 1.5972% ( 25) 00:07:54.212 8469.268 - 8519.680: 1.7882% ( 22) 00:07:54.212 8519.680 - 8570.092: 2.0052% ( 25) 00:07:54.212 8570.092 - 8620.505: 2.2830% ( 32) 00:07:54.212 8620.505 - 8670.917: 2.5521% ( 31) 00:07:54.212 8670.917 - 8721.329: 2.8385% ( 33) 00:07:54.212 8721.329 - 8771.742: 3.1424% ( 35) 00:07:54.212 8771.742 - 8822.154: 3.4809% ( 39) 00:07:54.212 8822.154 - 8872.566: 3.8715% ( 45) 00:07:54.212 8872.566 - 8922.978: 4.3663% ( 57) 00:07:54.212 8922.978 - 8973.391: 4.8785% ( 59) 00:07:54.212 8973.391 - 9023.803: 5.6337% ( 87) 00:07:54.212 9023.803 - 9074.215: 6.5278% ( 103) 00:07:54.212 9074.215 - 9124.628: 7.5000% ( 112) 00:07:54.212 9124.628 - 9175.040: 8.4983% ( 115) 00:07:54.212 9175.040 - 9225.452: 9.5920% ( 126) 00:07:54.212 9225.452 - 9275.865: 10.9462% ( 156) 00:07:54.212 9275.865 - 9326.277: 12.5260% ( 182) 00:07:54.212 9326.277 - 9376.689: 14.1580% ( 188) 00:07:54.212 9376.689 - 9427.102: 15.6858% ( 176) 00:07:54.212 9427.102 - 9477.514: 17.4913% ( 208) 00:07:54.212 9477.514 - 9527.926: 19.3576% ( 215) 00:07:54.212 9527.926 - 9578.338: 21.1979% ( 212) 00:07:54.212 9578.338 - 9628.751: 23.1771% ( 228) 00:07:54.212 9628.751 - 9679.163: 25.3819% ( 254) 00:07:54.212 9679.163 - 9729.575: 27.8733% ( 287) 00:07:54.212 9729.575 - 9779.988: 30.5035% ( 303) 00:07:54.212 9779.988 - 9830.400: 33.0469% ( 293) 00:07:54.212 9830.400 - 9880.812: 35.6597% ( 301) 00:07:54.212 9880.812 - 9931.225: 38.2465% ( 298) 00:07:54.212 9931.225 - 9981.637: 40.7465% ( 288) 00:07:54.212 9981.637 - 10032.049: 43.2812% ( 292) 00:07:54.212 10032.049 - 10082.462: 45.7118% ( 280) 00:07:54.212 10082.462 - 10132.874: 47.9688% ( 260) 00:07:54.212 10132.874 - 10183.286: 50.2778% ( 266) 00:07:54.212 10183.286 - 10233.698: 52.3785% ( 242) 00:07:54.212 10233.698 - 10284.111: 54.5660% ( 252) 00:07:54.212 10284.111 - 10334.523: 56.5625% ( 230) 00:07:54.212 10334.523 - 10384.935: 58.4983% ( 223) 00:07:54.212 10384.935 - 10435.348: 60.2604% ( 203) 00:07:54.212 10435.348 - 10485.760: 61.9965% ( 200) 00:07:54.212 10485.760 - 10536.172: 63.5330% ( 177) 00:07:54.212 10536.172 - 10586.585: 64.9566% ( 164) 00:07:54.212 10586.585 - 10636.997: 66.2760% ( 152) 00:07:54.212 10636.997 - 10687.409: 67.4653% ( 137) 00:07:54.212 10687.409 - 10737.822: 68.5243% ( 122) 00:07:54.212 10737.822 - 10788.234: 69.4792% ( 110) 00:07:54.212 10788.234 - 10838.646: 70.3472% ( 100) 00:07:54.212 10838.646 - 10889.058: 71.1285% ( 90) 00:07:54.212 10889.058 - 10939.471: 71.8403% ( 82) 00:07:54.212 10939.471 - 10989.883: 72.4740% ( 73) 00:07:54.212 10989.883 - 11040.295: 73.0556% ( 67) 00:07:54.212 11040.295 - 11090.708: 73.6024% ( 63) 00:07:54.212 11090.708 - 11141.120: 74.0365% ( 50) 00:07:54.212 11141.120 - 11191.532: 74.5226% ( 56) 00:07:54.212 11191.532 - 11241.945: 74.8958% ( 43) 00:07:54.212 11241.945 - 11292.357: 75.2344% ( 39) 00:07:54.212 11292.357 - 11342.769: 75.5469% ( 36) 00:07:54.212 11342.769 - 11393.182: 75.7899% ( 28) 00:07:54.212 11393.182 - 11443.594: 76.0590% ( 31) 00:07:54.212 11443.594 - 11494.006: 76.3542% ( 34) 00:07:54.212 11494.006 - 11544.418: 76.5972% ( 28) 00:07:54.212 11544.418 - 11594.831: 76.8056% ( 24) 00:07:54.212 11594.831 - 11645.243: 77.0312% ( 26) 00:07:54.212 11645.243 - 11695.655: 77.2396% ( 24) 00:07:54.212 11695.655 - 11746.068: 77.4045% ( 19) 00:07:54.212 11746.068 - 
[continuation of the preceding latency histogram, bucket lines condensed: cumulative IO rises from 77.5694% at 11796.480 us to 100.0000% at 23693.785 - 23794.609 us]
00:07:54.213 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:54.213 ==============================================================================
00:07:54.213 Range in us Cumulative IO count
[bucket lines condensed: from 4889.994 - 4915.200: 0.0174% ( 2) to 23391.311 - 23492.135: 100.0000% ( 3)]
00:07:54.214 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:54.214 ==============================================================================
00:07:54.214 Range in us Cumulative IO count
[bucket lines condensed: from 4511.902 - 4537.108: 0.0174% ( 2) to 22988.012 - 23088.837: 100.0000% ( 1)]
00:07:54.215 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:54.215 ==============================================================================
00:07:54.215 Range in us Cumulative IO count
[bucket lines condensed: from 4007.778 - 4032.985: 0.0087% ( 1) to 22584.714 - 22685.538: 100.0000% ( 4)]
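The histograms are cumulative: each bucket line pairs a latency range in microseconds with the running percentage of I/Os completed at or below that bucket, so any percentile can be read off as the first bucket whose cumulative figure crosses the target. A minimal lookup sketch, assuming the console output is saved to a file (perf.log is a hypothetical name) with one bucket per line as the tool originally printed it:

    import re

    # Matches bucket lines such as "11846.892 - 11897.305: 77.9774% ( 21)".
    BUCKET = re.compile(r"([\d.]+) - ([\d.]+):\s+([\d.]+)%\s+\(\s*(\d+)\)")

    def percentile_us(lines, pct):
        # Upper edge (us) of the first bucket reaching pct cumulative IO.
        for line in lines:
            m = BUCKET.search(line)
            if m and float(m.group(3)) >= pct:
                return float(m.group(2))
        return None

    with open("perf.log") as f:        # hypothetical capture of this log
        print(percentile_us(f, 99.0))  # ~99th percentile in microseconds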
00:07:54.216 18:02:28 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:55.164 Initializing NVMe Controllers
00:07:55.164 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:55.164 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:55.164 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:55.164 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:55.164 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:55.164 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:55.164 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:55.164 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:55.164 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:55.164 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:55.164 Initialization complete. Launching workers.
00:07:55.164 ========================================================
00:07:55.164                                                                             Latency(us)
00:07:55.164 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:55.164 PCIE (0000:00:10.0) NSID 1 from core 0:   13715.78     160.73    9345.50    7092.81   30119.15
00:07:55.164 PCIE (0000:00:11.0) NSID 1 from core 0:   13715.78     160.73    9339.13    7227.42   29621.38
00:07:55.164 PCIE (0000:00:13.0) NSID 1 from core 0:   13715.78     160.73    9332.63    7314.07   29801.74
00:07:55.164 PCIE (0000:00:12.0) NSID 1 from core 0:   13715.78     160.73    9326.62    7349.23   28859.01
00:07:55.164 PCIE (0000:00:12.0) NSID 2 from core 0:   13715.78     160.73    9319.60    7194.23   28697.83
00:07:55.164 PCIE (0000:00:12.0) NSID 3 from core 0:   13715.78     160.73    9312.46    7041.90   28084.89
00:07:55.164 ========================================================
00:07:55.164 Total                                  :   82294.69     964.39    9329.32    7041.90   30119.15
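A quick consistency check on the table above: the run issues 12288-byte writes (the -o 12288 argument), so each MiB/s figure should equal IOPS * 12288 / 2^20. A throwaway sketch using the first row:

    IO_SIZE = 12288                    # bytes per I/O, from -o 12288 above

    iops = 13715.78                    # PCIE (0000:00:10.0) NSID 1 row
    print(f"{iops * IO_SIZE / 2**20:.2f} MiB/s")  # 160.73, matching the table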
00:07:55.164 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:55.164 =================================================================================
00:07:55.164   1.00000% :  7511.434us
00:07:55.164  10.00000% :  7864.320us
00:07:55.164  25.00000% :  8217.206us
00:07:55.164  50.00000% :  8822.154us
00:07:55.164  75.00000% :  9830.400us
00:07:55.164  90.00000% : 11241.945us
00:07:55.164  95.00000% : 12250.191us
00:07:55.164  98.00000% : 14821.218us
00:07:55.164  99.00000% : 16131.938us
00:07:55.164  99.50000% : 22483.889us
00:07:55.164  99.90000% : 29844.086us
00:07:55.164  99.99000% : 30247.385us
00:07:55.164  99.99900% : 30247.385us
00:07:55.164  99.99990% : 30247.385us
00:07:55.164  99.99999% : 30247.385us
00:07:55.164 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:55.164 =================================================================================
00:07:55.164   1.00000% :  7612.258us
00:07:55.164  10.00000% :  7965.145us
00:07:55.164  25.00000% :  8217.206us
00:07:55.164  50.00000% :  8822.154us
00:07:55.164  75.00000% :  9729.575us
00:07:55.164  90.00000% : 11292.357us
00:07:55.164  95.00000% : 12351.015us
00:07:55.164  98.00000% : 14619.569us
00:07:55.164  99.00000% : 16232.763us
00:07:55.164  99.50000% : 22584.714us
00:07:55.164  99.90000% : 29440.788us
00:07:55.164  99.99000% : 29642.437us
00:07:55.164  99.99900% : 29642.437us
00:07:55.164  99.99990% : 29642.437us
00:07:55.164  99.99999% : 29642.437us
00:07:55.164 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:55.164 =================================================================================
00:07:55.164   1.00000% :  7662.671us
00:07:55.164  10.00000% :  7965.145us
00:07:55.164  25.00000% :  8217.206us
00:07:55.164  50.00000% :  8822.154us
00:07:55.164  75.00000% :  9729.575us
00:07:55.164  90.00000% : 11292.357us
00:07:55.164  95.00000% : 12502.252us
00:07:55.164  98.00000% : 14417.920us
00:07:55.164  99.00000% : 16434.412us
00:07:55.164  99.50000% : 22786.363us
00:07:55.164  99.90000% : 29642.437us
00:07:55.164  99.99000% : 29844.086us
00:07:55.164  99.99900% : 29844.086us
00:07:55.164  99.99990% : 29844.086us
00:07:55.164  99.99999% : 29844.086us
00:07:55.164 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:55.164 =================================================================================
00:07:55.164   1.00000% :  7662.671us
00:07:55.164  10.00000% :  7965.145us
00:07:55.164  25.00000% :  8217.206us
00:07:55.164  50.00000% :  8822.154us
00:07:55.164  75.00000% :  9779.988us
00:07:55.164  90.00000% : 11191.532us
00:07:55.164  95.00000% : 12552.665us
00:07:55.164  98.00000% : 14216.271us
00:07:55.164  99.00000% : 16333.588us
00:07:55.164  99.50000% : 22887.188us
00:07:55.164  99.90000% : 28634.191us
00:07:55.164  99.99000% : 28835.840us
00:07:55.164  99.99900% : 29037.489us
00:07:55.164  99.99990% : 29037.489us
00:07:55.164  99.99999% : 29037.489us
00:07:55.164 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:55.164 =================================================================================
00:07:55.164   1.00000% :  7662.671us
00:07:55.164  10.00000% :  7965.145us
00:07:55.164  25.00000% :  8217.206us
00:07:55.164  50.00000% :  8771.742us
00:07:55.164  75.00000% :  9729.575us
00:07:55.164  90.00000% : 11040.295us
00:07:55.164  95.00000% : 12300.603us
00:07:55.164  98.00000% : 14417.920us
00:07:55.164  99.00000% : 16232.763us
00:07:55.164  99.50000% : 22282.240us
00:07:55.164  99.90000% : 28432.542us
00:07:55.164  99.99000% : 28835.840us
00:07:55.164  99.99900% : 28835.840us
00:07:55.164  99.99990% : 28835.840us
00:07:55.164  99.99999% : 28835.840us
00:07:55.164 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:55.164 =================================================================================
00:07:55.164   1.00000% :  7612.258us
00:07:55.164  10.00000% :  7965.145us
00:07:55.164  25.00000% :  8217.206us
00:07:55.164  50.00000% :  8771.742us
00:07:55.164  75.00000% :  9779.988us
00:07:55.164  90.00000% : 11141.120us
00:07:55.164  95.00000% : 12098.954us
00:07:55.164  98.00000% : 14821.218us
00:07:55.164  99.00000% : 15829.465us
00:07:55.164  99.50000% : 22080.591us
00:07:55.164  99.90000% : 27827.594us
00:07:55.164  99.99000% : 28230.892us
00:07:55.164  99.99900% : 28230.892us
00:07:55.164  99.99990% : 28230.892us
00:07:55.164  99.99999% : 28230.892us
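The per-device averages further up are also consistent with Little's law for a closed-loop load generator: with -q 128 I/Os kept outstanding, sustained IOPS should be roughly 128 divided by the mean latency. A rough check, assuming the queue depth applies per namespace (which these numbers support):

    QUEUE_DEPTH = 128                  # from the -q 128 argument

    avg_latency_s = 9345.50e-6         # PCIE (0000:00:10.0) NSID 1 average, us -> s
    print(QUEUE_DEPTH / avg_latency_s) # ~13696 IOPS vs. 13715.78 reported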
00:07:55.164 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:55.164 ==============================================================================
00:07:55.164 Range in us Cumulative IO count
[bucket lines condensed: from 7057.723 - 7108.135: 0.0072% ( 1) to 30045.735 - 30247.385: 100.0000% ( 2)]
00:07:55.165 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:55.165 ==============================================================================
00:07:55.165 Range in us Cumulative IO count
[bucket lines condensed: from 7208.960 - 7259.372: 0.0145% ( 2) to 29440.788 - 29642.437: 100.0000% ( 8)]
00:07:55.166 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:55.166 ==============================================================================
00:07:55.166 Range in us Cumulative IO count
[bucket lines condensed: from 7309.785 - 7360.197: 0.0289% ( 4) up to about 21677.292 us at roughly 99.2% cumulative IO, where the captured log excerpt ends mid-histogram]
5) 00:07:55.167 21778.117 - 21878.942: 99.2332% ( 5) 00:07:55.167 21878.942 - 21979.766: 99.2694% ( 5) 00:07:55.167 21979.766 - 22080.591: 99.3056% ( 5) 00:07:55.167 22080.591 - 22181.415: 99.3345% ( 4) 00:07:55.167 22181.415 - 22282.240: 99.3634% ( 4) 00:07:55.167 22282.240 - 22383.065: 99.3996% ( 5) 00:07:55.167 22383.065 - 22483.889: 99.4285% ( 4) 00:07:55.167 22483.889 - 22584.714: 99.4430% ( 2) 00:07:55.167 22584.714 - 22685.538: 99.4719% ( 4) 00:07:55.167 22685.538 - 22786.363: 99.5081% ( 5) 00:07:55.167 22786.363 - 22887.188: 99.5298% ( 3) 00:07:55.167 22887.188 - 22988.012: 99.5370% ( 1) 00:07:55.167 28230.892 - 28432.542: 99.5804% ( 6) 00:07:55.167 28432.542 - 28634.191: 99.6383% ( 8) 00:07:55.167 28634.191 - 28835.840: 99.7034% ( 9) 00:07:55.167 28835.840 - 29037.489: 99.7613% ( 8) 00:07:55.167 29037.489 - 29239.138: 99.8264% ( 9) 00:07:55.167 29239.138 - 29440.788: 99.8915% ( 9) 00:07:55.167 29440.788 - 29642.437: 99.9566% ( 9) 00:07:55.167 29642.437 - 29844.086: 100.0000% ( 6) 00:07:55.167 00:07:55.167 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:55.167 ============================================================================== 00:07:55.167 Range in us Cumulative IO count 00:07:55.167 7309.785 - 7360.197: 0.0289% ( 4) 00:07:55.167 7360.197 - 7410.609: 0.0940% ( 9) 00:07:55.167 7410.609 - 7461.022: 0.1736% ( 11) 00:07:55.167 7461.022 - 7511.434: 0.2459% ( 10) 00:07:55.167 7511.434 - 7561.846: 0.3834% ( 19) 00:07:55.167 7561.846 - 7612.258: 0.8464% ( 64) 00:07:55.167 7612.258 - 7662.671: 1.6638% ( 113) 00:07:55.167 7662.671 - 7713.083: 2.4016% ( 102) 00:07:55.167 7713.083 - 7763.495: 3.1684% ( 106) 00:07:55.167 7763.495 - 7813.908: 4.5573% ( 192) 00:07:55.167 7813.908 - 7864.320: 6.3802% ( 252) 00:07:55.167 7864.320 - 7914.732: 8.8759% ( 345) 00:07:55.167 7914.732 - 7965.145: 11.0315% ( 298) 00:07:55.167 7965.145 - 8015.557: 13.6212% ( 358) 00:07:55.167 8015.557 - 8065.969: 16.8330% ( 444) 00:07:55.167 8065.969 - 8116.382: 20.2402% ( 471) 00:07:55.167 8116.382 - 8166.794: 23.5098% ( 452) 00:07:55.167 8166.794 - 8217.206: 26.0127% ( 346) 00:07:55.167 8217.206 - 8267.618: 28.9569% ( 407) 00:07:55.167 8267.618 - 8318.031: 31.6696% ( 375) 00:07:55.167 8318.031 - 8368.443: 33.9120% ( 310) 00:07:55.167 8368.443 - 8418.855: 36.3137% ( 332) 00:07:55.167 8418.855 - 8469.268: 38.5706% ( 312) 00:07:55.167 8469.268 - 8519.680: 40.6250% ( 284) 00:07:55.167 8519.680 - 8570.092: 43.1207% ( 345) 00:07:55.167 8570.092 - 8620.505: 45.0593% ( 268) 00:07:55.167 8620.505 - 8670.917: 46.6073% ( 214) 00:07:55.167 8670.917 - 8721.329: 48.1916% ( 219) 00:07:55.167 8721.329 - 8771.742: 49.8698% ( 232) 00:07:55.167 8771.742 - 8822.154: 51.5553% ( 233) 00:07:55.167 8822.154 - 8872.566: 53.1756% ( 224) 00:07:55.167 8872.566 - 8922.978: 54.7743% ( 221) 00:07:55.167 8922.978 - 8973.391: 56.1343% ( 188) 00:07:55.167 8973.391 - 9023.803: 57.4508% ( 182) 00:07:55.167 9023.803 - 9074.215: 58.7891% ( 185) 00:07:55.167 9074.215 - 9124.628: 60.0188% ( 170) 00:07:55.167 9124.628 - 9175.040: 61.2630% ( 172) 00:07:55.167 9175.040 - 9225.452: 62.4638% ( 166) 00:07:55.167 9225.452 - 9275.865: 63.6646% ( 166) 00:07:55.167 9275.865 - 9326.277: 65.0391% ( 190) 00:07:55.167 9326.277 - 9376.689: 66.0807% ( 144) 00:07:55.167 9376.689 - 9427.102: 67.1947% ( 154) 00:07:55.167 9427.102 - 9477.514: 68.5692% ( 190) 00:07:55.167 9477.514 - 9527.926: 69.7772% ( 167) 00:07:55.167 9527.926 - 9578.338: 70.7899% ( 140) 00:07:55.167 9578.338 - 9628.751: 72.1571% ( 189) 00:07:55.167 9628.751 - 9679.163: 73.2277% ( 
148) 00:07:55.167 9679.163 - 9729.575: 74.4792% ( 173) 00:07:55.167 9729.575 - 9779.988: 75.7234% ( 172) 00:07:55.167 9779.988 - 9830.400: 76.9748% ( 173) 00:07:55.167 9830.400 - 9880.812: 78.1829% ( 167) 00:07:55.167 9880.812 - 9931.225: 79.3837% ( 166) 00:07:55.167 9931.225 - 9981.637: 80.3241% ( 130) 00:07:55.167 9981.637 - 10032.049: 81.3585% ( 143) 00:07:55.167 10032.049 - 10082.462: 82.2483% ( 123) 00:07:55.167 10082.462 - 10132.874: 83.3550% ( 153) 00:07:55.167 10132.874 - 10183.286: 84.3244% ( 134) 00:07:55.167 10183.286 - 10233.698: 85.2937% ( 134) 00:07:55.167 10233.698 - 10284.111: 85.8652% ( 79) 00:07:55.167 10284.111 - 10334.523: 86.3571% ( 68) 00:07:55.167 10334.523 - 10384.935: 86.8056% ( 62) 00:07:55.167 10384.935 - 10435.348: 87.2323% ( 59) 00:07:55.167 10435.348 - 10485.760: 87.4711% ( 33) 00:07:55.167 10485.760 - 10536.172: 87.6519% ( 25) 00:07:55.167 10536.172 - 10586.585: 87.8400% ( 26) 00:07:55.167 10586.585 - 10636.997: 88.0136% ( 24) 00:07:55.167 10636.997 - 10687.409: 88.1872% ( 24) 00:07:55.167 10687.409 - 10737.822: 88.3825% ( 27) 00:07:55.167 10737.822 - 10788.234: 88.6646% ( 39) 00:07:55.167 10788.234 - 10838.646: 88.8672% ( 28) 00:07:55.167 10838.646 - 10889.058: 89.0408% ( 24) 00:07:55.167 10889.058 - 10939.471: 89.2361% ( 27) 00:07:55.167 10939.471 - 10989.883: 89.4242% ( 26) 00:07:55.167 10989.883 - 11040.295: 89.6267% ( 28) 00:07:55.167 11040.295 - 11090.708: 89.7859% ( 22) 00:07:55.167 11090.708 - 11141.120: 89.9233% ( 19) 00:07:55.167 11141.120 - 11191.532: 90.0535% ( 18) 00:07:55.167 11191.532 - 11241.945: 90.2127% ( 22) 00:07:55.167 11241.945 - 11292.357: 90.4442% ( 32) 00:07:55.167 11292.357 - 11342.769: 90.6829% ( 33) 00:07:55.167 11342.769 - 11393.182: 90.9867% ( 42) 00:07:55.167 11393.182 - 11443.594: 91.3339% ( 48) 00:07:55.167 11443.594 - 11494.006: 91.5654% ( 32) 00:07:55.167 11494.006 - 11544.418: 91.7752% ( 29) 00:07:55.167 11544.418 - 11594.831: 91.9488% ( 24) 00:07:55.167 11594.831 - 11645.243: 92.1224% ( 24) 00:07:55.167 11645.243 - 11695.655: 92.2526% ( 18) 00:07:55.167 11695.655 - 11746.068: 92.4045% ( 21) 00:07:55.167 11746.068 - 11796.480: 92.6215% ( 30) 00:07:55.167 11796.480 - 11846.892: 92.8168% ( 27) 00:07:55.168 11846.892 - 11897.305: 92.9688% ( 21) 00:07:55.168 11897.305 - 11947.717: 93.1062% ( 19) 00:07:55.168 11947.717 - 11998.129: 93.2726% ( 23) 00:07:55.168 11998.129 - 12048.542: 93.4389% ( 23) 00:07:55.168 12048.542 - 12098.954: 93.7428% ( 42) 00:07:55.168 12098.954 - 12149.366: 94.0321% ( 40) 00:07:55.168 12149.366 - 12199.778: 94.2564% ( 31) 00:07:55.168 12199.778 - 12250.191: 94.4083% ( 21) 00:07:55.168 12250.191 - 12300.603: 94.5095% ( 14) 00:07:55.168 12300.603 - 12351.015: 94.6181% ( 15) 00:07:55.168 12351.015 - 12401.428: 94.7338% ( 16) 00:07:55.168 12401.428 - 12451.840: 94.8640% ( 18) 00:07:55.168 12451.840 - 12502.252: 94.9725% ( 15) 00:07:55.168 12502.252 - 12552.665: 95.0810% ( 15) 00:07:55.168 12552.665 - 12603.077: 95.1823% ( 14) 00:07:55.168 12603.077 - 12653.489: 95.2763% ( 13) 00:07:55.168 12653.489 - 12703.902: 95.3848% ( 15) 00:07:55.168 12703.902 - 12754.314: 95.4861% ( 14) 00:07:55.168 12754.314 - 12804.726: 95.5874% ( 14) 00:07:55.168 12804.726 - 12855.138: 95.6742% ( 12) 00:07:55.168 12855.138 - 12905.551: 95.7972% ( 17) 00:07:55.168 12905.551 - 13006.375: 96.0648% ( 37) 00:07:55.168 13006.375 - 13107.200: 96.1661% ( 14) 00:07:55.168 13107.200 - 13208.025: 96.2963% ( 18) 00:07:55.168 13208.025 - 13308.849: 96.4265% ( 18) 00:07:55.168 13308.849 - 13409.674: 96.5639% ( 19) 00:07:55.168 13409.674 - 
13510.498: 96.7810% ( 30) 00:07:55.168 13510.498 - 13611.323: 97.0341% ( 35) 00:07:55.168 13611.323 - 13712.148: 97.2584% ( 31) 00:07:55.168 13712.148 - 13812.972: 97.4682% ( 29) 00:07:55.168 13812.972 - 13913.797: 97.6418% ( 24) 00:07:55.168 13913.797 - 14014.622: 97.8660% ( 31) 00:07:55.168 14014.622 - 14115.446: 97.9890% ( 17) 00:07:55.168 14115.446 - 14216.271: 98.0975% ( 15) 00:07:55.168 14216.271 - 14317.095: 98.1481% ( 7) 00:07:55.168 14417.920 - 14518.745: 98.1554% ( 1) 00:07:55.168 14619.569 - 14720.394: 98.1916% ( 5) 00:07:55.168 14720.394 - 14821.218: 98.2422% ( 7) 00:07:55.168 14821.218 - 14922.043: 98.2784% ( 5) 00:07:55.168 14922.043 - 15022.868: 98.4230% ( 20) 00:07:55.168 15022.868 - 15123.692: 98.5749% ( 21) 00:07:55.168 15123.692 - 15224.517: 98.5822% ( 1) 00:07:55.168 15224.517 - 15325.342: 98.6111% ( 4) 00:07:55.168 15829.465 - 15930.289: 98.6183% ( 1) 00:07:55.168 15930.289 - 16031.114: 98.6400% ( 3) 00:07:55.168 16031.114 - 16131.938: 98.6979% ( 8) 00:07:55.168 16131.938 - 16232.763: 98.9511% ( 35) 00:07:55.168 16232.763 - 16333.588: 99.0017% ( 7) 00:07:55.168 16333.588 - 16434.412: 99.0451% ( 6) 00:07:55.168 16434.412 - 16535.237: 99.0741% ( 4) 00:07:55.168 21475.643 - 21576.468: 99.1102% ( 5) 00:07:55.168 21576.468 - 21677.292: 99.1464% ( 5) 00:07:55.168 21677.292 - 21778.117: 99.1753% ( 4) 00:07:55.168 21778.117 - 21878.942: 99.2188% ( 6) 00:07:55.168 21878.942 - 21979.766: 99.2549% ( 5) 00:07:55.168 21979.766 - 22080.591: 99.2839% ( 4) 00:07:55.168 22080.591 - 22181.415: 99.3200% ( 5) 00:07:55.168 22181.415 - 22282.240: 99.3562% ( 5) 00:07:55.168 22282.240 - 22383.065: 99.3779% ( 3) 00:07:55.168 22383.065 - 22483.889: 99.3996% ( 3) 00:07:55.168 22483.889 - 22584.714: 99.4285% ( 4) 00:07:55.168 22584.714 - 22685.538: 99.4575% ( 4) 00:07:55.168 22685.538 - 22786.363: 99.4864% ( 4) 00:07:55.168 22786.363 - 22887.188: 99.5153% ( 4) 00:07:55.168 22887.188 - 22988.012: 99.5370% ( 3) 00:07:55.168 27625.945 - 27827.594: 99.5660% ( 4) 00:07:55.168 27827.594 - 28029.243: 99.7685% ( 28) 00:07:55.168 28029.243 - 28230.892: 99.8336% ( 9) 00:07:55.168 28230.892 - 28432.542: 99.8843% ( 7) 00:07:55.168 28432.542 - 28634.191: 99.9349% ( 7) 00:07:55.168 28634.191 - 28835.840: 99.9928% ( 8) 00:07:55.168 28835.840 - 29037.489: 100.0000% ( 1) 00:07:55.168 00:07:55.168 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:55.168 ============================================================================== 00:07:55.168 Range in us Cumulative IO count 00:07:55.168 7158.548 - 7208.960: 0.0072% ( 1) 00:07:55.168 7309.785 - 7360.197: 0.0579% ( 7) 00:07:55.168 7360.197 - 7410.609: 0.1447% ( 12) 00:07:55.168 7410.609 - 7461.022: 0.1953% ( 7) 00:07:55.168 7461.022 - 7511.434: 0.3472% ( 21) 00:07:55.168 7511.434 - 7561.846: 0.5570% ( 29) 00:07:55.168 7561.846 - 7612.258: 0.8753% ( 44) 00:07:55.168 7612.258 - 7662.671: 1.5046% ( 87) 00:07:55.168 7662.671 - 7713.083: 2.1918% ( 95) 00:07:55.168 7713.083 - 7763.495: 3.1973% ( 139) 00:07:55.168 7763.495 - 7813.908: 4.5718% ( 190) 00:07:55.168 7813.908 - 7864.320: 6.0185% ( 200) 00:07:55.168 7864.320 - 7914.732: 7.8631% ( 255) 00:07:55.168 7914.732 - 7965.145: 10.4962% ( 364) 00:07:55.168 7965.145 - 8015.557: 13.4838% ( 413) 00:07:55.168 8015.557 - 8065.969: 16.6811% ( 442) 00:07:55.168 8065.969 - 8116.382: 19.7627% ( 426) 00:07:55.168 8116.382 - 8166.794: 23.0252% ( 451) 00:07:55.168 8166.794 - 8217.206: 26.0851% ( 423) 00:07:55.168 8217.206 - 8267.618: 29.3475% ( 451) 00:07:55.168 8267.618 - 8318.031: 32.1181% ( 383) 00:07:55.168 
8318.031 - 8368.443: 34.7150% ( 359) 00:07:55.168 8368.443 - 8418.855: 37.2758% ( 354) 00:07:55.168 8418.855 - 8469.268: 39.3012% ( 280) 00:07:55.168 8469.268 - 8519.680: 41.3990% ( 290) 00:07:55.168 8519.680 - 8570.092: 43.5836% ( 302) 00:07:55.168 8570.092 - 8620.505: 45.2836% ( 235) 00:07:55.168 8620.505 - 8670.917: 47.1499% ( 258) 00:07:55.168 8670.917 - 8721.329: 48.4737% ( 183) 00:07:55.168 8721.329 - 8771.742: 50.0000% ( 211) 00:07:55.168 8771.742 - 8822.154: 51.4468% ( 200) 00:07:55.168 8822.154 - 8872.566: 53.1829% ( 240) 00:07:55.168 8872.566 - 8922.978: 54.5284% ( 186) 00:07:55.168 8922.978 - 8973.391: 55.8521% ( 183) 00:07:55.168 8973.391 - 9023.803: 57.0674% ( 168) 00:07:55.168 9023.803 - 9074.215: 58.4635% ( 193) 00:07:55.168 9074.215 - 9124.628: 59.5486% ( 150) 00:07:55.168 9124.628 - 9175.040: 60.6916% ( 158) 00:07:55.168 9175.040 - 9225.452: 61.8634% ( 162) 00:07:55.168 9225.452 - 9275.865: 63.1438% ( 177) 00:07:55.168 9275.865 - 9326.277: 64.3663% ( 169) 00:07:55.168 9326.277 - 9376.689: 65.6467% ( 177) 00:07:55.168 9376.689 - 9427.102: 67.2020% ( 215) 00:07:55.168 9427.102 - 9477.514: 69.0683% ( 258) 00:07:55.168 9477.514 - 9527.926: 70.2619% ( 165) 00:07:55.168 9527.926 - 9578.338: 71.7303% ( 203) 00:07:55.168 9578.338 - 9628.751: 73.0613% ( 184) 00:07:55.168 9628.751 - 9679.163: 74.1970% ( 157) 00:07:55.168 9679.163 - 9729.575: 75.2459% ( 145) 00:07:55.168 9729.575 - 9779.988: 76.4034% ( 160) 00:07:55.168 9779.988 - 9830.400: 77.4233% ( 141) 00:07:55.168 9830.400 - 9880.812: 78.2697% ( 117) 00:07:55.168 9880.812 - 9931.225: 79.1739% ( 125) 00:07:55.168 9931.225 - 9981.637: 80.1505% ( 135) 00:07:55.168 9981.637 - 10032.049: 81.0113% ( 119) 00:07:55.168 10032.049 - 10082.462: 82.0602% ( 145) 00:07:55.168 10082.462 - 10132.874: 83.1597% ( 152) 00:07:55.168 10132.874 - 10183.286: 83.8759% ( 99) 00:07:55.168 10183.286 - 10233.698: 84.4835% ( 84) 00:07:55.168 10233.698 - 10284.111: 84.9971% ( 71) 00:07:55.168 10284.111 - 10334.523: 85.4528% ( 63) 00:07:55.168 10334.523 - 10384.935: 86.0026% ( 76) 00:07:55.168 10384.935 - 10435.348: 86.4149% ( 57) 00:07:55.168 10435.348 - 10485.760: 86.8273% ( 57) 00:07:55.168 10485.760 - 10536.172: 87.1745% ( 48) 00:07:55.168 10536.172 - 10586.585: 87.5506% ( 52) 00:07:55.168 10586.585 - 10636.997: 87.9051% ( 49) 00:07:55.168 10636.997 - 10687.409: 88.1366% ( 32) 00:07:55.168 10687.409 - 10737.822: 88.3970% ( 36) 00:07:55.168 10737.822 - 10788.234: 88.6429% ( 34) 00:07:55.168 10788.234 - 10838.646: 88.8383% ( 27) 00:07:55.168 10838.646 - 10889.058: 89.2144% ( 52) 00:07:55.168 10889.058 - 10939.471: 89.5616% ( 48) 00:07:55.168 10939.471 - 10989.883: 89.8365% ( 38) 00:07:55.168 10989.883 - 11040.295: 90.1765% ( 47) 00:07:55.168 11040.295 - 11090.708: 90.4586% ( 39) 00:07:55.168 11090.708 - 11141.120: 90.7480% ( 40) 00:07:55.168 11141.120 - 11191.532: 90.9433% ( 27) 00:07:55.168 11191.532 - 11241.945: 91.1169% ( 24) 00:07:55.168 11241.945 - 11292.357: 91.3484% ( 32) 00:07:55.168 11292.357 - 11342.769: 91.4714% ( 17) 00:07:55.168 11342.769 - 11393.182: 91.6450% ( 24) 00:07:55.168 11393.182 - 11443.594: 91.8764% ( 32) 00:07:55.168 11443.594 - 11494.006: 92.0790% ( 28) 00:07:55.168 11494.006 - 11544.418: 92.3466% ( 37) 00:07:55.168 11544.418 - 11594.831: 92.6505% ( 42) 00:07:55.169 11594.831 - 11645.243: 92.8385% ( 26) 00:07:55.169 11645.243 - 11695.655: 93.0049% ( 23) 00:07:55.169 11695.655 - 11746.068: 93.1785% ( 24) 00:07:55.169 11746.068 - 11796.480: 93.3087% ( 18) 00:07:55.169 11796.480 - 11846.892: 93.4389% ( 18) 00:07:55.169 11846.892 - 
11897.305: 93.5547% ( 16) 00:07:55.169 11897.305 - 11947.717: 93.7283% ( 24) 00:07:55.169 11947.717 - 11998.129: 93.9670% ( 33) 00:07:55.169 11998.129 - 12048.542: 94.2491% ( 39) 00:07:55.169 12048.542 - 12098.954: 94.5023% ( 35) 00:07:55.169 12098.954 - 12149.366: 94.7193% ( 30) 00:07:55.169 12149.366 - 12199.778: 94.8568% ( 19) 00:07:55.169 12199.778 - 12250.191: 94.9580% ( 14) 00:07:55.169 12250.191 - 12300.603: 95.0521% ( 13) 00:07:55.169 12300.603 - 12351.015: 95.1317% ( 11) 00:07:55.169 12351.015 - 12401.428: 95.2040% ( 10) 00:07:55.169 12401.428 - 12451.840: 95.2329% ( 4) 00:07:55.169 12451.840 - 12502.252: 95.2619% ( 4) 00:07:55.169 12502.252 - 12552.665: 95.2836% ( 3) 00:07:55.169 12552.665 - 12603.077: 95.3125% ( 4) 00:07:55.169 12603.077 - 12653.489: 95.3559% ( 6) 00:07:55.169 12653.489 - 12703.902: 95.3704% ( 2) 00:07:55.169 12703.902 - 12754.314: 95.3993% ( 4) 00:07:55.169 12754.314 - 12804.726: 95.4499% ( 7) 00:07:55.169 12804.726 - 12855.138: 95.4933% ( 6) 00:07:55.169 12855.138 - 12905.551: 95.5223% ( 4) 00:07:55.169 12905.551 - 13006.375: 95.6019% ( 11) 00:07:55.169 13006.375 - 13107.200: 95.6959% ( 13) 00:07:55.169 13107.200 - 13208.025: 95.8261% ( 18) 00:07:55.169 13208.025 - 13308.849: 96.0431% ( 30) 00:07:55.169 13308.849 - 13409.674: 96.1878% ( 20) 00:07:55.169 13409.674 - 13510.498: 96.3108% ( 17) 00:07:55.169 13510.498 - 13611.323: 96.4337% ( 17) 00:07:55.169 13611.323 - 13712.148: 96.5784% ( 20) 00:07:55.169 13712.148 - 13812.972: 96.7665% ( 26) 00:07:55.169 13812.972 - 13913.797: 96.9907% ( 31) 00:07:55.169 13913.797 - 14014.622: 97.2005% ( 29) 00:07:55.169 14014.622 - 14115.446: 97.5043% ( 42) 00:07:55.169 14115.446 - 14216.271: 97.6997% ( 27) 00:07:55.169 14216.271 - 14317.095: 97.8950% ( 27) 00:07:55.169 14317.095 - 14417.920: 98.0396% ( 20) 00:07:55.169 14417.920 - 14518.745: 98.1337% ( 13) 00:07:55.169 14518.745 - 14619.569: 98.1916% ( 8) 00:07:55.169 14619.569 - 14720.394: 98.4375% ( 34) 00:07:55.169 14720.394 - 14821.218: 98.5315% ( 13) 00:07:55.169 14821.218 - 14922.043: 98.5822% ( 7) 00:07:55.169 14922.043 - 15022.868: 98.6111% ( 4) 00:07:55.169 15426.166 - 15526.991: 98.6400% ( 4) 00:07:55.169 15526.991 - 15627.815: 98.6834% ( 6) 00:07:55.169 15627.815 - 15728.640: 98.7413% ( 8) 00:07:55.169 15728.640 - 15829.465: 98.8137% ( 10) 00:07:55.169 15829.465 - 15930.289: 98.9222% ( 15) 00:07:55.169 15930.289 - 16031.114: 98.9583% ( 5) 00:07:55.169 16031.114 - 16131.938: 98.9873% ( 4) 00:07:55.169 16131.938 - 16232.763: 99.0090% ( 3) 00:07:55.169 16232.763 - 16333.588: 99.0379% ( 4) 00:07:55.169 16333.588 - 16434.412: 99.0596% ( 3) 00:07:55.169 16434.412 - 16535.237: 99.0668% ( 1) 00:07:55.169 16535.237 - 16636.062: 99.0741% ( 1) 00:07:55.169 20769.871 - 20870.695: 99.0813% ( 1) 00:07:55.169 20870.695 - 20971.520: 99.0885% ( 1) 00:07:55.169 20971.520 - 21072.345: 99.1102% ( 3) 00:07:55.169 21072.345 - 21173.169: 99.1464% ( 5) 00:07:55.169 21173.169 - 21273.994: 99.1826% ( 5) 00:07:55.169 21273.994 - 21374.818: 99.2115% ( 4) 00:07:55.169 21374.818 - 21475.643: 99.2549% ( 6) 00:07:55.169 21475.643 - 21576.468: 99.2911% ( 5) 00:07:55.169 21576.468 - 21677.292: 99.3345% ( 6) 00:07:55.169 21677.292 - 21778.117: 99.3634% ( 4) 00:07:55.169 21778.117 - 21878.942: 99.3924% ( 4) 00:07:55.169 21878.942 - 21979.766: 99.4213% ( 4) 00:07:55.169 21979.766 - 22080.591: 99.4430% ( 3) 00:07:55.169 22080.591 - 22181.415: 99.4719% ( 4) 00:07:55.169 22181.415 - 22282.240: 99.5009% ( 4) 00:07:55.169 22282.240 - 22383.065: 99.5298% ( 4) 00:07:55.169 22383.065 - 22483.889: 99.5370% ( 
1) 00:07:55.169 27020.997 - 27222.646: 99.5443% ( 1) 00:07:55.169 27222.646 - 27424.295: 99.5660% ( 3) 00:07:55.169 27424.295 - 27625.945: 99.5732% ( 1) 00:07:55.169 27625.945 - 27827.594: 99.7541% ( 25) 00:07:55.169 27827.594 - 28029.243: 99.8119% ( 8) 00:07:55.169 28029.243 - 28230.892: 99.8626% ( 7) 00:07:55.169 28230.892 - 28432.542: 99.9132% ( 7) 00:07:55.169 28432.542 - 28634.191: 99.9783% ( 9) 00:07:55.169 28634.191 - 28835.840: 100.0000% ( 3) 00:07:55.169 00:07:55.169 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:55.169 ============================================================================== 00:07:55.169 Range in us Cumulative IO count 00:07:55.169 7007.311 - 7057.723: 0.0145% ( 2) 00:07:55.169 7057.723 - 7108.135: 0.0796% ( 9) 00:07:55.169 7108.135 - 7158.548: 0.1302% ( 7) 00:07:55.169 7158.548 - 7208.960: 0.1664% ( 5) 00:07:55.169 7208.960 - 7259.372: 0.2098% ( 6) 00:07:55.169 7259.372 - 7309.785: 0.2749% ( 9) 00:07:55.169 7309.785 - 7360.197: 0.3038% ( 4) 00:07:55.169 7360.197 - 7410.609: 0.3762% ( 10) 00:07:55.169 7410.609 - 7461.022: 0.4774% ( 14) 00:07:55.169 7461.022 - 7511.434: 0.6293% ( 21) 00:07:55.169 7511.434 - 7561.846: 0.8391% ( 29) 00:07:55.169 7561.846 - 7612.258: 1.1574% ( 44) 00:07:55.169 7612.258 - 7662.671: 1.7723% ( 85) 00:07:55.169 7662.671 - 7713.083: 2.5535% ( 108) 00:07:55.169 7713.083 - 7763.495: 3.5084% ( 132) 00:07:55.169 7763.495 - 7813.908: 4.6152% ( 153) 00:07:55.169 7813.908 - 7864.320: 6.2066% ( 220) 00:07:55.169 7864.320 - 7914.732: 7.8921% ( 233) 00:07:55.169 7914.732 - 7965.145: 10.1056% ( 306) 00:07:55.169 7965.145 - 8015.557: 13.0642% ( 409) 00:07:55.169 8015.557 - 8065.969: 16.1386% ( 425) 00:07:55.169 8065.969 - 8116.382: 19.0321% ( 400) 00:07:55.169 8116.382 - 8166.794: 22.1137% ( 426) 00:07:55.169 8166.794 - 8217.206: 25.6076% ( 483) 00:07:55.169 8217.206 - 8267.618: 28.9135% ( 457) 00:07:55.169 8267.618 - 8318.031: 31.5972% ( 371) 00:07:55.169 8318.031 - 8368.443: 34.4835% ( 399) 00:07:55.169 8368.443 - 8418.855: 36.8056% ( 321) 00:07:55.169 8418.855 - 8469.268: 39.2361% ( 336) 00:07:55.169 8469.268 - 8519.680: 41.9922% ( 381) 00:07:55.169 8519.680 - 8570.092: 44.1623% ( 300) 00:07:55.169 8570.092 - 8620.505: 46.1444% ( 274) 00:07:55.169 8620.505 - 8670.917: 47.8660% ( 238) 00:07:55.169 8670.917 - 8721.329: 49.3851% ( 210) 00:07:55.169 8721.329 - 8771.742: 50.8898% ( 208) 00:07:55.169 8771.742 - 8822.154: 52.5318% ( 227) 00:07:55.169 8822.154 - 8872.566: 53.7254% ( 165) 00:07:55.169 8872.566 - 8922.978: 54.6296% ( 125) 00:07:55.169 8922.978 - 8973.391: 55.9462% ( 182) 00:07:55.169 8973.391 - 9023.803: 57.1253% ( 163) 00:07:55.169 9023.803 - 9074.215: 58.4925% ( 189) 00:07:55.169 9074.215 - 9124.628: 59.8452% ( 187) 00:07:55.169 9124.628 - 9175.040: 61.1183% ( 176) 00:07:55.169 9175.040 - 9225.452: 62.3553% ( 171) 00:07:55.169 9225.452 - 9275.865: 63.9323% ( 218) 00:07:55.169 9275.865 - 9326.277: 65.3429% ( 195) 00:07:55.169 9326.277 - 9376.689: 66.7028% ( 188) 00:07:55.169 9376.689 - 9427.102: 67.7951% ( 151) 00:07:55.169 9427.102 - 9477.514: 69.1768% ( 191) 00:07:55.169 9477.514 - 9527.926: 70.2980% ( 155) 00:07:55.169 9527.926 - 9578.338: 71.2963% ( 138) 00:07:55.169 9578.338 - 9628.751: 72.5911% ( 179) 00:07:55.169 9628.751 - 9679.163: 73.7486% ( 160) 00:07:55.169 9679.163 - 9729.575: 74.9060% ( 160) 00:07:55.169 9729.575 - 9779.988: 76.2153% ( 181) 00:07:55.169 9779.988 - 9830.400: 77.1557% ( 130) 00:07:55.169 9830.400 - 9880.812: 78.3854% ( 170) 00:07:55.169 9880.812 - 9931.225: 79.4777% ( 151) 00:07:55.169 
9931.225 - 9981.637: 80.1939% ( 99) 00:07:55.169 9981.637 - 10032.049: 80.8377% ( 89) 00:07:55.169 10032.049 - 10082.462: 81.4887% ( 90) 00:07:55.169 10082.462 - 10132.874: 82.4074% ( 127) 00:07:55.169 10132.874 - 10183.286: 83.2538% ( 117) 00:07:55.169 10183.286 - 10233.698: 83.7891% ( 74) 00:07:55.169 10233.698 - 10284.111: 84.2954% ( 70) 00:07:55.169 10284.111 - 10334.523: 84.8380% ( 75) 00:07:55.169 10334.523 - 10384.935: 85.3443% ( 70) 00:07:55.169 10384.935 - 10435.348: 85.9954% ( 90) 00:07:55.169 10435.348 - 10485.760: 86.4366% ( 61) 00:07:55.169 10485.760 - 10536.172: 86.7549% ( 44) 00:07:55.169 10536.172 - 10586.585: 87.0515% ( 41) 00:07:55.169 10586.585 - 10636.997: 87.2685% ( 30) 00:07:55.169 10636.997 - 10687.409: 87.5145% ( 34) 00:07:55.169 10687.409 - 10737.822: 87.8111% ( 41) 00:07:55.169 10737.822 - 10788.234: 88.1076% ( 41) 00:07:55.169 10788.234 - 10838.646: 88.3753% ( 37) 00:07:55.169 10838.646 - 10889.058: 88.5995% ( 31) 00:07:55.169 10889.058 - 10939.471: 88.9106% ( 43) 00:07:55.169 10939.471 - 10989.883: 89.2506% ( 47) 00:07:55.169 10989.883 - 11040.295: 89.6412% ( 54) 00:07:55.169 11040.295 - 11090.708: 89.9378% ( 41) 00:07:55.169 11090.708 - 11141.120: 90.2271% ( 40) 00:07:55.169 11141.120 - 11191.532: 90.5237% ( 41) 00:07:55.169 11191.532 - 11241.945: 90.7552% ( 32) 00:07:55.169 11241.945 - 11292.357: 91.0012% ( 34) 00:07:55.169 11292.357 - 11342.769: 91.2688% ( 37) 00:07:55.169 11342.769 - 11393.182: 91.5871% ( 44) 00:07:55.169 11393.182 - 11443.594: 91.8692% ( 39) 00:07:55.169 11443.594 - 11494.006: 92.1658% ( 41) 00:07:55.169 11494.006 - 11544.418: 92.5637% ( 55) 00:07:55.169 11544.418 - 11594.831: 92.8602% ( 41) 00:07:55.169 11594.831 - 11645.243: 93.1424% ( 39) 00:07:55.169 11645.243 - 11695.655: 93.5402% ( 55) 00:07:55.169 11695.655 - 11746.068: 93.8368% ( 41) 00:07:55.169 11746.068 - 11796.480: 94.0900% ( 35) 00:07:55.169 11796.480 - 11846.892: 94.2998% ( 29) 00:07:55.169 11846.892 - 11897.305: 94.4734% ( 24) 00:07:55.169 11897.305 - 11947.717: 94.6108% ( 19) 00:07:55.169 11947.717 - 11998.129: 94.7555% ( 20) 00:07:55.169 11998.129 - 12048.542: 94.8640% ( 15) 00:07:55.169 12048.542 - 12098.954: 95.0231% ( 22) 00:07:55.169 12098.954 - 12149.366: 95.1317% ( 15) 00:07:55.169 12149.366 - 12199.778: 95.1968% ( 9) 00:07:55.169 12199.778 - 12250.191: 95.2836% ( 12) 00:07:55.169 12250.191 - 12300.603: 95.3704% ( 12) 00:07:55.170 12300.603 - 12351.015: 95.4282% ( 8) 00:07:55.170 12351.015 - 12401.428: 95.4861% ( 8) 00:07:55.170 12401.428 - 12451.840: 95.5223% ( 5) 00:07:55.170 12451.840 - 12502.252: 95.5512% ( 4) 00:07:55.170 12502.252 - 12552.665: 95.6091% ( 8) 00:07:55.170 12552.665 - 12603.077: 95.6670% ( 8) 00:07:55.170 12603.077 - 12653.489: 95.7538% ( 12) 00:07:55.170 12653.489 - 12703.902: 95.8478% ( 13) 00:07:55.170 12703.902 - 12754.314: 95.8984% ( 7) 00:07:55.170 12754.314 - 12804.726: 95.9274% ( 4) 00:07:55.170 12804.726 - 12855.138: 95.9635% ( 5) 00:07:55.170 12855.138 - 12905.551: 95.9997% ( 5) 00:07:55.170 12905.551 - 13006.375: 96.0865% ( 12) 00:07:55.170 13006.375 - 13107.200: 96.2167% ( 18) 00:07:55.170 13107.200 - 13208.025: 96.3252% ( 15) 00:07:55.170 13208.025 - 13308.849: 96.3976% ( 10) 00:07:55.170 13308.849 - 13409.674: 96.5350% ( 19) 00:07:55.170 13409.674 - 13510.498: 96.6001% ( 9) 00:07:55.170 13510.498 - 13611.323: 96.6218% ( 3) 00:07:55.170 13611.323 - 13712.148: 96.6508% ( 4) 00:07:55.170 13712.148 - 13812.972: 96.6942% ( 6) 00:07:55.170 13812.972 - 13913.797: 96.8388% ( 20) 00:07:55.170 13913.797 - 14014.622: 96.9980% ( 22) 00:07:55.170 
14014.622 - 14115.446: 97.1499% ( 21) 00:07:55.170 14115.446 - 14216.271: 97.2873% ( 19) 00:07:55.170 14216.271 - 14317.095: 97.4103% ( 17) 00:07:55.170 14317.095 - 14417.920: 97.5477% ( 19) 00:07:55.170 14417.920 - 14518.745: 97.7069% ( 22) 00:07:55.170 14518.745 - 14619.569: 97.8371% ( 18) 00:07:55.170 14619.569 - 14720.394: 97.9311% ( 13) 00:07:55.170 14720.394 - 14821.218: 98.0324% ( 14) 00:07:55.170 14821.218 - 14922.043: 98.1192% ( 12) 00:07:55.170 14922.043 - 15022.868: 98.2205% ( 14) 00:07:55.170 15022.868 - 15123.692: 98.4809% ( 36) 00:07:55.170 15123.692 - 15224.517: 98.7196% ( 33) 00:07:55.170 15224.517 - 15325.342: 98.8498% ( 18) 00:07:55.170 15325.342 - 15426.166: 98.9077% ( 8) 00:07:55.170 15426.166 - 15526.991: 98.9294% ( 3) 00:07:55.170 15526.991 - 15627.815: 98.9511% ( 3) 00:07:55.170 15627.815 - 15728.640: 98.9800% ( 4) 00:07:55.170 15728.640 - 15829.465: 99.0017% ( 3) 00:07:55.170 15829.465 - 15930.289: 99.0234% ( 3) 00:07:55.170 15930.289 - 16031.114: 99.0379% ( 2) 00:07:55.170 16031.114 - 16131.938: 99.0596% ( 3) 00:07:55.170 16131.938 - 16232.763: 99.0668% ( 1) 00:07:55.170 16232.763 - 16333.588: 99.0741% ( 1) 00:07:55.170 20669.046 - 20769.871: 99.0885% ( 2) 00:07:55.170 20769.871 - 20870.695: 99.1247% ( 5) 00:07:55.170 20870.695 - 20971.520: 99.1609% ( 5) 00:07:55.170 20971.520 - 21072.345: 99.1970% ( 5) 00:07:55.170 21072.345 - 21173.169: 99.2332% ( 5) 00:07:55.170 21173.169 - 21273.994: 99.2694% ( 5) 00:07:55.170 21273.994 - 21374.818: 99.2983% ( 4) 00:07:55.170 21374.818 - 21475.643: 99.3273% ( 4) 00:07:55.170 21475.643 - 21576.468: 99.3634% ( 5) 00:07:55.170 21576.468 - 21677.292: 99.3924% ( 4) 00:07:55.170 21677.292 - 21778.117: 99.4213% ( 4) 00:07:55.170 21778.117 - 21878.942: 99.4430% ( 3) 00:07:55.170 21878.942 - 21979.766: 99.4792% ( 5) 00:07:55.170 21979.766 - 22080.591: 99.5081% ( 4) 00:07:55.170 22080.591 - 22181.415: 99.5370% ( 4) 00:07:55.170 26819.348 - 27020.997: 99.5587% ( 3) 00:07:55.170 27020.997 - 27222.646: 99.6600% ( 14) 00:07:55.170 27222.646 - 27424.295: 99.7830% ( 17) 00:07:55.170 27424.295 - 27625.945: 99.8626% ( 11) 00:07:55.170 27625.945 - 27827.594: 99.9204% ( 8) 00:07:55.170 27827.594 - 28029.243: 99.9783% ( 8) 00:07:55.170 28029.243 - 28230.892: 100.0000% ( 3) 00:07:55.170 00:07:55.170 18:02:29 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:55.170 00:07:55.170 real 0m2.467s 00:07:55.170 user 0m2.187s 00:07:55.170 sys 0m0.166s 00:07:55.170 18:02:29 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.170 ************************************ 00:07:55.170 END TEST nvme_perf 00:07:55.170 18:02:29 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.170 ************************************ 00:07:55.432 18:02:29 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:55.432 18:02:29 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:55.432 18:02:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.432 18:02:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.432 ************************************ 00:07:55.432 START TEST nvme_hello_world 00:07:55.432 ************************************ 00:07:55.432 18:02:29 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:55.432 Initializing NVMe Controllers 00:07:55.432 Attached to 0000:00:10.0 00:07:55.432 Namespace ID: 1 size: 6GB 00:07:55.432 Attached to 
0000:00:11.0 00:07:55.432 Namespace ID: 1 size: 5GB 00:07:55.432 Attached to 0000:00:13.0 00:07:55.432 Namespace ID: 1 size: 1GB 00:07:55.432 Attached to 0000:00:12.0 00:07:55.432 Namespace ID: 1 size: 4GB 00:07:55.432 Namespace ID: 2 size: 4GB 00:07:55.432 Namespace ID: 3 size: 4GB 00:07:55.432 Initialization complete. 00:07:55.432 INFO: using host memory buffer for IO 00:07:55.432 Hello world! 00:07:55.432 INFO: using host memory buffer for IO 00:07:55.432 Hello world! 00:07:55.432 INFO: using host memory buffer for IO 00:07:55.432 Hello world! 00:07:55.432 INFO: using host memory buffer for IO 00:07:55.432 Hello world! 00:07:55.432 INFO: using host memory buffer for IO 00:07:55.432 Hello world! 00:07:55.432 INFO: using host memory buffer for IO 00:07:55.432 Hello world! 00:07:55.432 00:07:55.432 real 0m0.181s 00:07:55.432 user 0m0.064s 00:07:55.432 sys 0m0.074s 00:07:55.432 18:02:29 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.432 ************************************ 00:07:55.432 END TEST nvme_hello_world 00:07:55.432 18:02:29 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:55.432 ************************************ 00:07:55.432 18:02:29 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:55.432 18:02:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.432 18:02:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.432 18:02:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.432 ************************************ 00:07:55.432 START TEST nvme_sgl 00:07:55.432 ************************************ 00:07:55.432 18:02:29 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:55.693 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:55.693 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:55.693 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:55.693 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:55.693 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:55.693 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:55.693 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:55.693 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:55.693 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:55.693 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:55.693 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:55.693 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:55.693 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:55.693 0000:00:13.0: 
build_io_request_11 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:55.693 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:55.693 NVMe Readv/Writev Request test 00:07:55.693 Attached to 0000:00:10.0 00:07:55.693 Attached to 0000:00:11.0 00:07:55.693 Attached to 0000:00:13.0 00:07:55.693 Attached to 0000:00:12.0 00:07:55.693 0000:00:10.0: build_io_request_2 test passed 00:07:55.693 0000:00:10.0: build_io_request_4 test passed 00:07:55.693 0000:00:10.0: build_io_request_5 test passed 00:07:55.693 0000:00:10.0: build_io_request_6 test passed 00:07:55.693 0000:00:10.0: build_io_request_7 test passed 00:07:55.693 0000:00:10.0: build_io_request_10 test passed 00:07:55.693 0000:00:11.0: build_io_request_2 test passed 00:07:55.693 0000:00:11.0: build_io_request_4 test passed 00:07:55.693 0000:00:11.0: build_io_request_5 test passed 00:07:55.693 0000:00:11.0: build_io_request_6 test passed 00:07:55.693 0000:00:11.0: build_io_request_7 test passed 00:07:55.693 0000:00:11.0: build_io_request_10 test passed 00:07:55.693 Cleaning up... 00:07:55.693 00:07:55.693 real 0m0.222s 00:07:55.693 user 0m0.121s 00:07:55.693 sys 0m0.070s 00:07:55.693 18:02:30 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.693 ************************************ 00:07:55.693 END TEST nvme_sgl 00:07:55.693 ************************************ 00:07:55.693 18:02:30 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:55.693 18:02:30 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:55.693 18:02:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.693 18:02:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.693 18:02:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.693 ************************************ 00:07:55.693 START TEST nvme_e2edp 00:07:55.693 ************************************ 00:07:55.693 18:02:30 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:55.954 NVMe Write/Read with End-to-End data protection test 00:07:55.954 Attached to 0000:00:10.0 00:07:55.954 Attached to 0000:00:11.0 00:07:55.954 Attached to 0000:00:13.0 00:07:55.954 Attached to 0000:00:12.0 00:07:55.954 Cleaning up... 
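The "Invalid IO length parameter" lines above are the sgl test probing request validation: it builds scatter-gather reads whose total payload length is deliberately not a multiple of the namespace sector size and expects the driver to reject them before submission, while the well-formed variants (build_io_request_2, _4, _5, ...) pass. Below is a minimal sketch of how such a scattered request is driven through the public SPDK API, assuming the documented spdk_nvme_ns_cmd_readv() callback contract; names like sgl_ctx and submit_sgl_read are illustrative, not the test's actual source.

/* sgl_sketch.c - submit one scattered read. The total iov length must be a
 * multiple of the sector size, or the request is rejected up front (the
 * "Invalid IO length parameter" cases above). Buffers must be DMA-safe,
 * e.g. allocated with spdk_zmalloc(..., SPDK_MALLOC_DMA). */
#include <stdbool.h>
#include <sys/uio.h>
#include "spdk/nvme.h"

struct sgl_ctx {
	struct iovec iovs[4]; /* scattered payload segments */
	int idx;              /* segment cursor used by the callbacks */
	size_t off;           /* byte offset inside the current segment */
	bool done;
};

/* Driver callback: rewind the SGL iterator to an absolute byte offset. */
static void reset_sgl(void *arg, uint32_t offset)
{
	struct sgl_ctx *c = arg;

	c->idx = 0;
	c->off = offset;
	while (c->off >= c->iovs[c->idx].iov_len) {
		c->off -= c->iovs[c->idx++].iov_len;
	}
}

/* Driver callback: hand back the next scatter-gather element. */
static int next_sge(void *arg, void **address, uint32_t *length)
{
	struct sgl_ctx *c = arg;

	*address = (char *)c->iovs[c->idx].iov_base + c->off;
	*length = c->iovs[c->idx].iov_len - c->off;
	c->off = 0;
	c->idx++;
	return 0;
}

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	((struct sgl_ctx *)arg)->done = true;
}

int submit_sgl_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
		    struct sgl_ctx *c, uint64_t lba, uint32_t lba_count)
{
	/* lba/lba_count come before the callbacks; io_flags is 0 here */
	return spdk_nvme_ns_cmd_readv(ns, qp, lba, lba_count,
				      io_done, c, 0, reset_sgl, next_sge);
}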
00:07:55.954 00:07:55.954 real 0m0.184s 00:07:55.954 user 0m0.071s 00:07:55.954 sys 0m0.066s 00:07:55.954 18:02:30 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.954 18:02:30 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:55.954 ************************************ 00:07:55.954 END TEST nvme_e2edp 00:07:55.954 ************************************ 00:07:55.954 18:02:30 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:55.954 18:02:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.954 18:02:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.954 18:02:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.954 ************************************ 00:07:55.954 START TEST nvme_reserve 00:07:55.954 ************************************ 00:07:55.954 18:02:30 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:56.215 ===================================================== 00:07:56.215 NVMe Controller at PCI bus 0, device 16, function 0 00:07:56.215 ===================================================== 00:07:56.215 Reservations: Not Supported 00:07:56.215 ===================================================== 00:07:56.215 NVMe Controller at PCI bus 0, device 17, function 0 00:07:56.215 ===================================================== 00:07:56.215 Reservations: Not Supported 00:07:56.215 ===================================================== 00:07:56.215 NVMe Controller at PCI bus 0, device 19, function 0 00:07:56.215 ===================================================== 00:07:56.215 Reservations: Not Supported 00:07:56.215 ===================================================== 00:07:56.215 NVMe Controller at PCI bus 0, device 18, function 0 00:07:56.215 ===================================================== 00:07:56.215 Reservations: Not Supported 00:07:56.215 Reservation test passed 00:07:56.215 00:07:56.215 real 0m0.178s 00:07:56.215 user 0m0.049s 00:07:56.215 sys 0m0.087s 00:07:56.215 18:02:30 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.215 18:02:30 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:56.215 ************************************ 00:07:56.215 END TEST nvme_reserve 00:07:56.215 ************************************ 00:07:56.215 18:02:30 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:56.215 18:02:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.215 18:02:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.215 18:02:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.215 ************************************ 00:07:56.215 START TEST nvme_err_injection 00:07:56.215 ************************************ 00:07:56.215 18:02:30 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:56.476 NVMe Error Injection test 00:07:56.476 Attached to 0000:00:10.0 00:07:56.476 Attached to 0000:00:11.0 00:07:56.476 Attached to 0000:00:13.0 00:07:56.476 Attached to 0000:00:12.0 00:07:56.476 0000:00:10.0: get features failed as expected 00:07:56.476 0000:00:11.0: get features failed as expected 00:07:56.476 0000:00:13.0: get features failed as expected 00:07:56.476 0000:00:12.0: get features failed as expected 00:07:56.476 
0000:00:10.0: get features successfully as expected 00:07:56.476 0000:00:11.0: get features successfully as expected 00:07:56.476 0000:00:13.0: get features successfully as expected 00:07:56.476 0000:00:12.0: get features successfully as expected 00:07:56.476 0000:00:10.0: read failed as expected 00:07:56.476 0000:00:11.0: read failed as expected 00:07:56.476 0000:00:13.0: read failed as expected 00:07:56.476 0000:00:12.0: read failed as expected 00:07:56.476 0000:00:13.0: read successfully as expected 00:07:56.476 0000:00:10.0: read successfully as expected 00:07:56.476 0000:00:11.0: read successfully as expected 00:07:56.476 0000:00:12.0: read successfully as expected 00:07:56.476 Cleaning up... 00:07:56.476 00:07:56.476 real 0m0.188s 00:07:56.476 user 0m0.071s 00:07:56.476 sys 0m0.073s 00:07:56.476 18:02:30 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.476 18:02:30 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:56.476 ************************************ 00:07:56.476 END TEST nvme_err_injection 00:07:56.476 ************************************ 00:07:56.476 18:02:30 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:56.476 18:02:30 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:56.476 18:02:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.476 18:02:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.476 ************************************ 00:07:56.476 START TEST nvme_overhead 00:07:56.476 ************************************ 00:07:56.476 18:02:30 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:57.862 Initializing NVMe Controllers 00:07:57.862 Attached to 0000:00:10.0 00:07:57.862 Attached to 0000:00:11.0 00:07:57.862 Attached to 0000:00:13.0 00:07:57.862 Attached to 0000:00:12.0 00:07:57.862 Initialization complete. Launching workers. 
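The submit/complete figures reported next are pure software overhead: the time spent inside the submission call and inside the completion-polling call, converted from TSC ticks to nanoseconds and bucketed into the two histograms that follow. A hedged sketch of that measurement loop using the documented spdk_get_ticks()/spdk_get_ticks_hz() API is shown below; it is not the overhead tool's actual source, and measure_one_io is an illustrative helper.

/* overhead_sketch.c - time the submit and completion paths of one 4 KiB
 * read, the quantities histogrammed below. `buf` must be DMA-safe memory,
 * e.g. spdk_zmalloc(4096, 4096, NULL, SPDK_ENV_SOCKET_ID_ANY,
 * SPDK_MALLOC_DMA). */
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)arg = true;
}

void measure_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
		    void *buf, uint64_t lba,
		    uint64_t *submit_ns, uint64_t *complete_ns)
{
	uint64_t hz = spdk_get_ticks_hz(); /* TSC ticks per second */
	uint64_t t0, t1, poll_ticks = 0;
	bool done = false;

	/* submit path: only the time inside the submission call counts */
	t0 = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qp, buf, lba, 1, io_complete, &done, 0);
	t1 = spdk_get_ticks();
	*submit_ns = (t1 - t0) * 1000000000ULL / hz;

	/* completion path: poll until our callback fires, accumulating only
	 * the time spent inside process_completions() itself */
	while (!done) {
		t0 = spdk_get_ticks();
		spdk_nvme_qpair_process_completions(qp, 0);
		poll_ticks += spdk_get_ticks() - t0;
	}
	*complete_ns = poll_ticks * 1000000000ULL / hz;
}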
00:07:57.862 submit (in ns) avg, min, max = 11496.6, 10658.5, 65661.5 00:07:57.862 complete (in ns) avg, min, max = 8022.2, 7274.6, 897364.6 00:07:57.862 00:07:57.862 Submit histogram 00:07:57.862 ================ 00:07:57.862 Range in us Cumulative Count 00:07:57.862 10.634 - 10.683: 0.0086% ( 1) 00:07:57.862 10.683 - 10.732: 0.1207% ( 13) 00:07:57.862 10.732 - 10.782: 0.3104% ( 22) 00:07:57.862 10.782 - 10.831: 1.0175% ( 82) 00:07:57.862 10.831 - 10.880: 3.1301% ( 245) 00:07:57.862 10.880 - 10.929: 7.6485% ( 524) 00:07:57.862 10.929 - 10.978: 14.9090% ( 842) 00:07:57.862 10.978 - 11.028: 24.3597% ( 1096) 00:07:57.862 11.028 - 11.077: 34.8797% ( 1220) 00:07:57.862 11.077 - 11.126: 44.6840% ( 1137) 00:07:57.862 11.126 - 11.175: 52.7292% ( 933) 00:07:57.862 11.175 - 11.225: 59.7999% ( 820) 00:07:57.862 11.225 - 11.274: 66.0947% ( 730) 00:07:57.862 11.274 - 11.323: 71.8548% ( 668) 00:07:57.862 11.323 - 11.372: 76.5888% ( 549) 00:07:57.862 11.372 - 11.422: 80.1932% ( 418) 00:07:57.862 11.422 - 11.471: 83.0042% ( 326) 00:07:57.862 11.471 - 11.520: 84.9703% ( 228) 00:07:57.862 11.520 - 11.569: 86.2723% ( 151) 00:07:57.862 11.569 - 11.618: 87.3416% ( 124) 00:07:57.862 11.618 - 11.668: 88.0141% ( 78) 00:07:57.862 11.668 - 11.717: 88.6350% ( 72) 00:07:57.862 11.717 - 11.766: 89.2817% ( 75) 00:07:57.862 11.766 - 11.815: 89.8767% ( 69) 00:07:57.862 11.815 - 11.865: 90.5148% ( 74) 00:07:57.862 11.865 - 11.914: 91.1356% ( 72) 00:07:57.862 11.914 - 11.963: 91.6875% ( 64) 00:07:57.862 11.963 - 12.012: 92.1876% ( 58) 00:07:57.862 12.012 - 12.062: 92.7136% ( 61) 00:07:57.862 12.062 - 12.111: 93.1103% ( 46) 00:07:57.862 12.111 - 12.160: 93.4380% ( 38) 00:07:57.862 12.160 - 12.209: 93.7570% ( 37) 00:07:57.862 12.209 - 12.258: 93.9381% ( 21) 00:07:57.862 12.258 - 12.308: 94.1192% ( 21) 00:07:57.862 12.308 - 12.357: 94.2140% ( 11) 00:07:57.862 12.357 - 12.406: 94.2658% ( 6) 00:07:57.862 12.406 - 12.455: 94.3261% ( 7) 00:07:57.862 12.455 - 12.505: 94.3434% ( 2) 00:07:57.862 12.505 - 12.554: 94.3606% ( 2) 00:07:57.862 12.554 - 12.603: 94.3865% ( 3) 00:07:57.862 12.603 - 12.702: 94.4210% ( 4) 00:07:57.862 12.702 - 12.800: 94.4727% ( 6) 00:07:57.862 12.800 - 12.898: 94.5158% ( 5) 00:07:57.862 12.898 - 12.997: 94.5762% ( 7) 00:07:57.862 12.997 - 13.095: 94.6883% ( 13) 00:07:57.862 13.095 - 13.194: 94.8262% ( 16) 00:07:57.862 13.194 - 13.292: 94.9815% ( 18) 00:07:57.862 13.292 - 13.391: 95.1712% ( 22) 00:07:57.862 13.391 - 13.489: 95.4126% ( 28) 00:07:57.862 13.489 - 13.588: 95.6109% ( 23) 00:07:57.862 13.588 - 13.686: 95.7748% ( 19) 00:07:57.862 13.686 - 13.785: 95.9472% ( 20) 00:07:57.862 13.785 - 13.883: 96.0162% ( 8) 00:07:57.862 13.883 - 13.982: 96.0679% ( 6) 00:07:57.862 13.982 - 14.080: 96.1283% ( 7) 00:07:57.862 14.080 - 14.178: 96.1800% ( 6) 00:07:57.862 14.178 - 14.277: 96.2835% ( 12) 00:07:57.862 14.277 - 14.375: 96.3094% ( 3) 00:07:57.862 14.375 - 14.474: 96.3439% ( 4) 00:07:57.862 14.474 - 14.572: 96.3870% ( 5) 00:07:57.862 14.572 - 14.671: 96.4215% ( 4) 00:07:57.862 14.671 - 14.769: 96.4474% ( 3) 00:07:57.862 14.769 - 14.868: 96.4560% ( 1) 00:07:57.862 14.868 - 14.966: 96.4646% ( 1) 00:07:57.862 14.966 - 15.065: 96.4905% ( 3) 00:07:57.862 15.065 - 15.163: 96.5422% ( 6) 00:07:57.862 15.163 - 15.262: 96.6112% ( 8) 00:07:57.862 15.262 - 15.360: 96.6543% ( 5) 00:07:57.862 15.360 - 15.458: 96.6974% ( 5) 00:07:57.862 15.458 - 15.557: 96.7664% ( 8) 00:07:57.862 15.557 - 15.655: 96.7923% ( 3) 00:07:57.862 15.655 - 15.754: 96.8440% ( 6) 00:07:57.862 15.754 - 15.852: 96.8785% ( 4) 00:07:57.862 15.852 - 15.951: 
96.9302% ( 6) 00:07:57.862 15.951 - 16.049: 96.9906% ( 7) 00:07:57.862 16.049 - 16.148: 97.0165% ( 3) 00:07:57.862 16.148 - 16.246: 97.0423% ( 3) 00:07:57.862 16.246 - 16.345: 97.1113% ( 8) 00:07:57.862 16.345 - 16.443: 97.3441% ( 27) 00:07:57.862 16.443 - 16.542: 97.6373% ( 34) 00:07:57.862 16.542 - 16.640: 97.7925% ( 18) 00:07:57.862 16.640 - 16.738: 97.9822% ( 22) 00:07:57.862 16.738 - 16.837: 98.0857% ( 12) 00:07:57.862 16.837 - 16.935: 98.2668% ( 21) 00:07:57.862 16.935 - 17.034: 98.3530% ( 10) 00:07:57.862 17.034 - 17.132: 98.4651% ( 13) 00:07:57.862 17.132 - 17.231: 98.5600% ( 11) 00:07:57.862 17.231 - 17.329: 98.7066% ( 17) 00:07:57.862 17.329 - 17.428: 98.8790% ( 20) 00:07:57.862 17.428 - 17.526: 98.9997% ( 14) 00:07:57.862 17.526 - 17.625: 99.1205% ( 14) 00:07:57.862 17.625 - 17.723: 99.2412% ( 14) 00:07:57.862 17.723 - 17.822: 99.3188% ( 9) 00:07:57.862 17.822 - 17.920: 99.3447% ( 3) 00:07:57.862 17.920 - 18.018: 99.3791% ( 4) 00:07:57.862 18.018 - 18.117: 99.4050% ( 3) 00:07:57.862 18.117 - 18.215: 99.4309% ( 3) 00:07:57.862 18.215 - 18.314: 99.4481% ( 2) 00:07:57.862 18.314 - 18.412: 99.4568% ( 1) 00:07:57.862 18.412 - 18.511: 99.4654% ( 1) 00:07:57.862 18.511 - 18.609: 99.4826% ( 2) 00:07:57.862 18.609 - 18.708: 99.5085% ( 3) 00:07:57.862 18.708 - 18.806: 99.5171% ( 1) 00:07:57.862 18.806 - 18.905: 99.5257% ( 1) 00:07:57.862 18.905 - 19.003: 99.5344% ( 1) 00:07:57.862 19.102 - 19.200: 99.5602% ( 3) 00:07:57.862 19.200 - 19.298: 99.5775% ( 2) 00:07:57.862 19.298 - 19.397: 99.5861% ( 1) 00:07:57.862 19.397 - 19.495: 99.6033% ( 2) 00:07:57.862 19.495 - 19.594: 99.6206% ( 2) 00:07:57.862 19.594 - 19.692: 99.6465% ( 3) 00:07:57.862 19.791 - 19.889: 99.6551% ( 1) 00:07:57.862 19.988 - 20.086: 99.6637% ( 1) 00:07:57.862 20.086 - 20.185: 99.6723% ( 1) 00:07:57.862 20.185 - 20.283: 99.6896% ( 2) 00:07:57.862 20.283 - 20.382: 99.7068% ( 2) 00:07:57.862 20.480 - 20.578: 99.7154% ( 1) 00:07:57.862 20.775 - 20.874: 99.7327% ( 2) 00:07:57.862 21.169 - 21.268: 99.7413% ( 1) 00:07:57.862 21.268 - 21.366: 99.7499% ( 1) 00:07:57.862 21.366 - 21.465: 99.7586% ( 1) 00:07:57.862 21.465 - 21.563: 99.7672% ( 1) 00:07:57.862 21.858 - 21.957: 99.7758% ( 1) 00:07:57.862 22.351 - 22.449: 99.7844% ( 1) 00:07:57.862 22.942 - 23.040: 99.7930% ( 1) 00:07:57.862 23.040 - 23.138: 99.8103% ( 2) 00:07:57.862 23.138 - 23.237: 99.8189% ( 1) 00:07:57.862 23.631 - 23.729: 99.8275% ( 1) 00:07:57.862 24.911 - 25.009: 99.8362% ( 1) 00:07:57.862 25.108 - 25.206: 99.8448% ( 1) 00:07:57.862 25.797 - 25.994: 99.8534% ( 1) 00:07:57.862 26.191 - 26.388: 99.8620% ( 1) 00:07:57.862 26.388 - 26.585: 99.8707% ( 1) 00:07:57.862 27.569 - 27.766: 99.8793% ( 1) 00:07:57.863 28.554 - 28.751: 99.8879% ( 1) 00:07:57.863 29.342 - 29.538: 99.8965% ( 1) 00:07:57.863 29.932 - 30.129: 99.9051% ( 1) 00:07:57.863 31.508 - 31.705: 99.9138% ( 1) 00:07:57.863 33.871 - 34.068: 99.9224% ( 1) 00:07:57.863 34.855 - 35.052: 99.9310% ( 1) 00:07:57.863 35.052 - 35.249: 99.9396% ( 1) 00:07:57.863 40.369 - 40.566: 99.9483% ( 1) 00:07:57.863 42.338 - 42.535: 99.9569% ( 1) 00:07:57.863 45.489 - 45.686: 99.9655% ( 1) 00:07:57.863 50.018 - 50.215: 99.9741% ( 1) 00:07:57.863 51.594 - 51.988: 99.9828% ( 1) 00:07:57.863 63.015 - 63.409: 99.9914% ( 1) 00:07:57.863 65.378 - 65.772: 100.0000% ( 1) 00:07:57.863 00:07:57.863 Complete histogram 00:07:57.863 ================== 00:07:57.863 Range in us Cumulative Count 00:07:57.863 7.237 - 7.286: 0.0259% ( 3) 00:07:57.863 7.286 - 7.335: 0.2759% ( 29) 00:07:57.863 7.335 - 7.385: 1.5521% ( 148) 00:07:57.863 7.385 - 
7.434: 6.4413% ( 567) 00:07:57.863 7.434 - 7.483: 17.0561% ( 1231) 00:07:57.863 7.483 - 7.532: 30.9735% ( 1614) 00:07:57.863 7.532 - 7.582: 44.8651% ( 1611) 00:07:57.863 7.582 - 7.631: 54.2468% ( 1088) 00:07:57.863 7.631 - 7.680: 59.7396% ( 637) 00:07:57.863 7.680 - 7.729: 63.0163% ( 380) 00:07:57.863 7.729 - 7.778: 65.0427% ( 235) 00:07:57.863 7.778 - 7.828: 65.9998% ( 111) 00:07:57.863 7.828 - 7.877: 66.6207% ( 72) 00:07:57.863 7.877 - 7.926: 67.0863% ( 54) 00:07:57.863 7.926 - 7.975: 67.4399% ( 41) 00:07:57.863 7.975 - 8.025: 68.3711% ( 108) 00:07:57.863 8.025 - 8.074: 71.0011% ( 305) 00:07:57.863 8.074 - 8.123: 74.8642% ( 448) 00:07:57.863 8.123 - 8.172: 78.7962% ( 456) 00:07:57.863 8.172 - 8.222: 82.3489% ( 412) 00:07:57.863 8.222 - 8.271: 85.6342% ( 381) 00:07:57.863 8.271 - 8.320: 88.6177% ( 346) 00:07:57.863 8.320 - 8.369: 91.2650% ( 307) 00:07:57.863 8.369 - 8.418: 92.8085% ( 179) 00:07:57.863 8.418 - 8.468: 93.9295% ( 130) 00:07:57.863 8.468 - 8.517: 94.6969% ( 89) 00:07:57.863 8.517 - 8.566: 95.2660% ( 66) 00:07:57.863 8.566 - 8.615: 95.7661% ( 58) 00:07:57.863 8.615 - 8.665: 96.0335% ( 31) 00:07:57.863 8.665 - 8.714: 96.1714% ( 16) 00:07:57.863 8.714 - 8.763: 96.2404% ( 8) 00:07:57.863 8.763 - 8.812: 96.3180% ( 9) 00:07:57.863 8.812 - 8.862: 96.3784% ( 7) 00:07:57.863 8.862 - 8.911: 96.3870% ( 1) 00:07:57.863 8.911 - 8.960: 96.4042% ( 2) 00:07:57.863 8.960 - 9.009: 96.4474% ( 5) 00:07:57.863 9.009 - 9.058: 96.4732% ( 3) 00:07:57.863 9.058 - 9.108: 96.4991% ( 3) 00:07:57.863 9.108 - 9.157: 96.5508% ( 6) 00:07:57.863 9.157 - 9.206: 96.5767% ( 3) 00:07:57.863 9.206 - 9.255: 96.6112% ( 4) 00:07:57.863 9.255 - 9.305: 96.6371% ( 3) 00:07:57.863 9.305 - 9.354: 96.6457% ( 1) 00:07:57.863 9.354 - 9.403: 96.6543% ( 1) 00:07:57.863 9.403 - 9.452: 96.6629% ( 1) 00:07:57.863 9.452 - 9.502: 96.6888% ( 3) 00:07:57.863 9.502 - 9.551: 96.6974% ( 1) 00:07:57.863 9.551 - 9.600: 96.7060% ( 1) 00:07:57.863 9.748 - 9.797: 96.7233% ( 2) 00:07:57.863 9.797 - 9.846: 96.7405% ( 2) 00:07:57.863 10.043 - 10.092: 96.7492% ( 1) 00:07:57.863 10.092 - 10.142: 96.7664% ( 2) 00:07:57.863 10.142 - 10.191: 96.7750% ( 1) 00:07:57.863 10.191 - 10.240: 96.7837% ( 1) 00:07:57.863 10.338 - 10.388: 96.7923% ( 1) 00:07:57.863 10.388 - 10.437: 96.8095% ( 2) 00:07:57.863 10.437 - 10.486: 96.8268% ( 2) 00:07:57.863 10.486 - 10.535: 96.8354% ( 1) 00:07:57.863 10.535 - 10.585: 96.8699% ( 4) 00:07:57.863 10.585 - 10.634: 96.8785% ( 1) 00:07:57.863 10.831 - 10.880: 96.9044% ( 3) 00:07:57.863 10.880 - 10.929: 96.9216% ( 2) 00:07:57.863 10.929 - 10.978: 96.9389% ( 2) 00:07:57.863 11.077 - 11.126: 96.9475% ( 1) 00:07:57.863 11.126 - 11.175: 96.9561% ( 1) 00:07:57.863 11.175 - 11.225: 96.9734% ( 2) 00:07:57.863 11.225 - 11.274: 97.0337% ( 7) 00:07:57.863 11.323 - 11.372: 97.0596% ( 3) 00:07:57.863 11.372 - 11.422: 97.1027% ( 5) 00:07:57.863 11.422 - 11.471: 97.1113% ( 1) 00:07:57.863 11.471 - 11.520: 97.1372% ( 3) 00:07:57.863 11.569 - 11.618: 97.2062% ( 8) 00:07:57.863 11.618 - 11.668: 97.3441% ( 16) 00:07:57.863 11.668 - 11.717: 97.5166% ( 20) 00:07:57.863 11.717 - 11.766: 97.6287% ( 13) 00:07:57.863 11.766 - 11.815: 97.7925% ( 19) 00:07:57.863 11.815 - 11.865: 97.8443% ( 6) 00:07:57.863 11.865 - 11.914: 97.9046% ( 7) 00:07:57.863 11.914 - 11.963: 97.9736% ( 8) 00:07:57.863 11.963 - 12.012: 98.0254% ( 6) 00:07:57.863 12.012 - 12.062: 98.0771% ( 6) 00:07:57.863 12.062 - 12.111: 98.1892% ( 13) 00:07:57.863 12.111 - 12.160: 98.2582% ( 8) 00:07:57.863 12.160 - 12.209: 98.3013% ( 5) 00:07:57.863 12.209 - 12.258: 98.3444% ( 5) 
00:07:57.863 12.258 - 12.308: 98.3961% ( 6) 00:07:57.863 12.308 - 12.357: 98.4393% ( 5) 00:07:57.863 12.406 - 12.455: 98.4479% ( 1) 00:07:57.863 12.455 - 12.505: 98.4737% ( 3) 00:07:57.863 12.554 - 12.603: 98.4910% ( 2) 00:07:57.863 12.603 - 12.702: 98.5255% ( 4) 00:07:57.863 12.702 - 12.800: 98.5513% ( 3) 00:07:57.863 12.800 - 12.898: 98.5686% ( 2) 00:07:57.863 12.898 - 12.997: 98.6117% ( 5) 00:07:57.863 12.997 - 13.095: 98.6290% ( 2) 00:07:57.863 13.095 - 13.194: 98.6634% ( 4) 00:07:57.863 13.194 - 13.292: 98.7238% ( 7) 00:07:57.863 13.292 - 13.391: 98.7928% ( 8) 00:07:57.863 13.391 - 13.489: 98.8445% ( 6) 00:07:57.863 13.489 - 13.588: 98.9566% ( 13) 00:07:57.863 13.588 - 13.686: 99.0773% ( 14) 00:07:57.863 13.686 - 13.785: 99.1291% ( 6) 00:07:57.863 13.785 - 13.883: 99.2067% ( 9) 00:07:57.863 13.883 - 13.982: 99.3102% ( 12) 00:07:57.863 13.982 - 14.080: 99.3964% ( 10) 00:07:57.863 14.080 - 14.178: 99.4740% ( 9) 00:07:57.863 14.178 - 14.277: 99.5516% ( 9) 00:07:57.863 14.277 - 14.375: 99.5689% ( 2) 00:07:57.863 14.375 - 14.474: 99.5775% ( 1) 00:07:57.863 14.474 - 14.572: 99.6120% ( 4) 00:07:57.863 14.572 - 14.671: 99.6465% ( 4) 00:07:57.863 14.671 - 14.769: 99.6810% ( 4) 00:07:57.863 14.769 - 14.868: 99.6896% ( 1) 00:07:57.863 14.868 - 14.966: 99.7241% ( 4) 00:07:57.863 14.966 - 15.065: 99.7499% ( 3) 00:07:57.863 15.065 - 15.163: 99.7586% ( 1) 00:07:57.863 15.163 - 15.262: 99.7930% ( 4) 00:07:57.863 15.360 - 15.458: 99.8017% ( 1) 00:07:57.863 16.049 - 16.148: 99.8103% ( 1) 00:07:57.863 16.738 - 16.837: 99.8189% ( 1) 00:07:57.863 16.837 - 16.935: 99.8362% ( 2) 00:07:57.863 17.526 - 17.625: 99.8448% ( 1) 00:07:57.863 17.723 - 17.822: 99.8534% ( 1) 00:07:57.863 17.822 - 17.920: 99.8620% ( 1) 00:07:57.863 18.412 - 18.511: 99.8707% ( 1) 00:07:57.863 18.609 - 18.708: 99.8879% ( 2) 00:07:57.863 20.480 - 20.578: 99.8965% ( 1) 00:07:57.863 21.760 - 21.858: 99.9051% ( 1) 00:07:57.863 21.957 - 22.055: 99.9138% ( 1) 00:07:57.863 22.548 - 22.646: 99.9224% ( 1) 00:07:57.863 25.600 - 25.797: 99.9310% ( 1) 00:07:57.863 26.191 - 26.388: 99.9396% ( 1) 00:07:57.863 26.978 - 27.175: 99.9483% ( 1) 00:07:57.863 30.326 - 30.523: 99.9569% ( 1) 00:07:57.863 30.720 - 30.917: 99.9655% ( 1) 00:07:57.863 32.295 - 32.492: 99.9741% ( 1) 00:07:57.863 40.369 - 40.566: 99.9828% ( 1) 00:07:57.863 66.954 - 67.348: 99.9914% ( 1) 00:07:57.863 894.818 - 901.120: 100.0000% ( 1) 00:07:57.863 00:07:57.863 00:07:57.863 real 0m1.179s 00:07:57.863 user 0m1.065s 00:07:57.863 sys 0m0.073s 00:07:57.863 18:02:31 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.863 18:02:31 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:57.863 ************************************ 00:07:57.863 END TEST nvme_overhead 00:07:57.863 ************************************ 00:07:57.863 18:02:31 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:57.863 18:02:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:57.863 18:02:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.863 18:02:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.863 ************************************ 00:07:57.863 START TEST nvme_arbitration 00:07:57.863 ************************************ 00:07:57.863 18:02:31 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:01.166 Initializing NVMe Controllers 00:08:01.166 Attached to 0000:00:10.0 
00:08:01.166 Attached to 0000:00:11.0 00:08:01.166 Attached to 0000:00:13.0 00:08:01.166 Attached to 0000:00:12.0 00:08:01.166 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:01.166 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:01.166 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:01.166 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:01.166 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:01.166 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:01.166 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:01.166 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:01.166 Initialization complete. Launching workers. 00:08:01.166 Starting thread on core 1 with urgent priority queue 00:08:01.166 Starting thread on core 2 with urgent priority queue 00:08:01.166 Starting thread on core 3 with urgent priority queue 00:08:01.166 Starting thread on core 0 with urgent priority queue 00:08:01.166 QEMU NVMe Ctrl (12340 ) core 0: 6848.00 IO/s 14.60 secs/100000 ios 00:08:01.166 QEMU NVMe Ctrl (12342 ) core 0: 6848.00 IO/s 14.60 secs/100000 ios 00:08:01.166 QEMU NVMe Ctrl (12341 ) core 1: 6954.67 IO/s 14.38 secs/100000 ios 00:08:01.166 QEMU NVMe Ctrl (12342 ) core 1: 6954.67 IO/s 14.38 secs/100000 ios 00:08:01.166 QEMU NVMe Ctrl (12343 ) core 2: 6784.00 IO/s 14.74 secs/100000 ios 00:08:01.166 QEMU NVMe Ctrl (12342 ) core 3: 6997.33 IO/s 14.29 secs/100000 ios 00:08:01.166 ======================================================== 00:08:01.166 00:08:01.166 00:08:01.166 real 0m3.208s 00:08:01.166 user 0m9.043s 00:08:01.166 sys 0m0.090s 00:08:01.166 18:02:35 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.166 ************************************ 00:08:01.166 END TEST nvme_arbitration 00:08:01.166 ************************************ 00:08:01.166 18:02:35 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:01.166 18:02:35 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:01.166 18:02:35 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.166 18:02:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.166 18:02:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.166 ************************************ 00:08:01.166 START TEST nvme_single_aen 00:08:01.166 ************************************ 00:08:01.166 18:02:35 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:01.166 Asynchronous Event Request test 00:08:01.166 Attached to 0000:00:10.0 00:08:01.166 Attached to 0000:00:11.0 00:08:01.166 Attached to 0000:00:13.0 00:08:01.166 Attached to 0000:00:12.0 00:08:01.166 Reset controller to setup AER completions for this process 00:08:01.166 Registering asynchronous event callbacks... 
00:08:01.166 Getting orig temperature thresholds of all controllers 00:08:01.166 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.166 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.166 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.166 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.166 Setting all controllers temperature threshold low to trigger AER 00:08:01.166 Waiting for all controllers temperature threshold to be set lower 00:08:01.166 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.166 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:01.166 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.166 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:01.166 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.166 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:01.166 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.166 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:01.166 Waiting for all controllers to trigger AER and reset threshold 00:08:01.166 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.166 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.166 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.166 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.166 Cleaning up... 00:08:01.166 00:08:01.166 real 0m0.186s 00:08:01.166 user 0m0.071s 00:08:01.166 sys 0m0.072s 00:08:01.166 18:02:35 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.166 ************************************ 00:08:01.166 18:02:35 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:01.166 END TEST nvme_single_aen 00:08:01.166 ************************************ 00:08:01.166 18:02:35 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:01.166 18:02:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.166 18:02:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.166 18:02:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.166 ************************************ 00:08:01.166 START TEST nvme_doorbell_aers 00:08:01.166 ************************************ 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
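The gen_nvme.sh/jq xtrace records above show how the harness discovers controller addresses: scripts/gen_nvme.sh emits a JSON config and jq pulls out each traddr. As a standalone, hedged sketch of that idiom (paths taken from this log; the emptiness check mirrors the "(( 4 == 0 ))" guard traced below):

#!/usr/bin/env bash
# Sketch of the bdf-discovery idiom (get_nvme_bdfs) traced above.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
# Bail out if no controllers were found (the "(( N == 0 ))" check in the trace).
(( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # e.g. 0000:00:10.0 through 0000:00:13.0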
00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:01.166 18:02:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:01.428 [2024-12-13 18:02:35.678571] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request. 00:08:11.434 Executing: test_write_invalid_db 00:08:11.434 Waiting for AER completion... 00:08:11.434 Failure: test_write_invalid_db 00:08:11.434 00:08:11.434 Executing: test_invalid_db_write_overflow_sq 00:08:11.434 Waiting for AER completion... 00:08:11.434 Failure: test_invalid_db_write_overflow_sq 00:08:11.434 00:08:11.434 Executing: test_invalid_db_write_overflow_cq 00:08:11.434 Waiting for AER completion... 00:08:11.434 Failure: test_invalid_db_write_overflow_cq 00:08:11.434 00:08:11.434 18:02:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:11.434 18:02:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:11.434 [2024-12-13 18:02:45.682410] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request. 00:08:21.428 Executing: test_write_invalid_db 00:08:21.428 Waiting for AER completion... 00:08:21.428 Failure: test_write_invalid_db 00:08:21.428 00:08:21.428 Executing: test_invalid_db_write_overflow_sq 00:08:21.428 Waiting for AER completion... 00:08:21.428 Failure: test_invalid_db_write_overflow_sq 00:08:21.428 00:08:21.428 Executing: test_invalid_db_write_overflow_cq 00:08:21.428 Waiting for AER completion... 00:08:21.428 Failure: test_invalid_db_write_overflow_cq 00:08:21.428 00:08:21.428 18:02:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:21.428 18:02:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:21.428 [2024-12-13 18:02:55.696383] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request. 00:08:31.417 Executing: test_write_invalid_db 00:08:31.417 Waiting for AER completion... 00:08:31.417 Failure: test_write_invalid_db 00:08:31.417 00:08:31.417 Executing: test_invalid_db_write_overflow_sq 00:08:31.417 Waiting for AER completion... 00:08:31.417 Failure: test_invalid_db_write_overflow_sq 00:08:31.417 00:08:31.417 Executing: test_invalid_db_write_overflow_cq 00:08:31.417 Waiting for AER completion... 
00:08:31.417 Failure: test_invalid_db_write_overflow_cq
00:08:31.417
00:08:31.417 18:03:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:31.417 18:03:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:31.417 [2024-12-13 18:03:05.747935] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.393 Executing: test_write_invalid_db
00:08:41.393 Waiting for AER completion...
00:08:41.393 Failure: test_write_invalid_db
00:08:41.393
00:08:41.393 Executing: test_invalid_db_write_overflow_sq
00:08:41.393 Waiting for AER completion...
00:08:41.393 Failure: test_invalid_db_write_overflow_sq
00:08:41.393
00:08:41.393 Executing: test_invalid_db_write_overflow_cq
00:08:41.393 Waiting for AER completion...
00:08:41.393 Failure: test_invalid_db_write_overflow_cq
00:08:41.393
00:08:41.393
00:08:41.393 real 0m40.163s
00:08:41.393 user 0m34.283s
00:08:41.394 sys 0m5.526s
00:08:41.394 18:03:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.394 ************************************
00:08:41.394 END TEST nvme_doorbell_aers
00:08:41.394 ************************************
00:08:41.394 18:03:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:08:41.394 18:03:15 nvme -- nvme/nvme.sh@97 -- # uname
00:08:41.394 18:03:15 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:08:41.394 18:03:15 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:41.394 18:03:15 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:41.394 18:03:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:41.394 18:03:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:41.394 ************************************
00:08:41.394 START TEST nvme_multi_aen
00:08:41.394 ************************************
00:08:41.394 18:03:15 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:41.652 [2024-12-13 18:03:15.786317] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.786771] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.786847] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.788033] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.788128] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.788171] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.789187] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.789281] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.789320] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.790420] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.790579] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 [2024-12-13 18:03:15.790637] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76495) is not found. Dropping the request.
00:08:41.652 Child process pid: 77016
00:08:41.652 [Child] Asynchronous Event Request test
00:08:41.652 [Child] Attached to 0000:00:10.0
00:08:41.652 [Child] Attached to 0000:00:11.0
00:08:41.652 [Child] Attached to 0000:00:13.0
00:08:41.652 [Child] Attached to 0000:00:12.0
00:08:41.652 [Child] Registering asynchronous event callbacks...
00:08:41.652 [Child] Getting orig temperature thresholds of all controllers
00:08:41.652 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:41.652 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:41.652 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:41.652 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:41.652 [Child] Waiting for all controllers to trigger AER and reset threshold
00:08:41.652 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:41.652 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:41.652 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:41.652 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:41.653 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:41.653 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:41.653 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:41.653 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:41.653 [Child] Cleaning up...
00:08:41.653 Asynchronous Event Request test
00:08:41.653 Attached to 0000:00:10.0
00:08:41.653 Attached to 0000:00:11.0
00:08:41.653 Attached to 0000:00:13.0
00:08:41.653 Attached to 0000:00:12.0
00:08:41.653 Reset controller to setup AER completions for this process
00:08:41.653 Registering asynchronous event callbacks...
00:08:41.653 Getting orig temperature thresholds of all controllers 00:08:41.653 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.653 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.653 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.653 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.653 Setting all controllers temperature threshold low to trigger AER 00:08:41.653 Waiting for all controllers temperature threshold to be set lower 00:08:41.653 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.653 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:41.653 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.653 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:41.653 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.653 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:41.653 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.653 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:41.653 Waiting for all controllers to trigger AER and reset threshold 00:08:41.653 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.653 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.653 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.653 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.653 Cleaning up... 00:08:41.653 00:08:41.653 real 0m0.374s 00:08:41.653 user 0m0.125s 00:08:41.653 sys 0m0.134s 00:08:41.653 18:03:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.653 18:03:16 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:41.653 ************************************ 00:08:41.653 END TEST nvme_multi_aen 00:08:41.653 ************************************ 00:08:41.911 18:03:16 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:41.911 18:03:16 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.911 18:03:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.911 18:03:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.911 ************************************ 00:08:41.911 START TEST nvme_startup 00:08:41.911 ************************************ 00:08:41.911 18:03:16 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:41.911 Initializing NVMe Controllers 00:08:41.911 Attached to 0000:00:10.0 00:08:41.911 Attached to 0000:00:11.0 00:08:41.911 Attached to 0000:00:13.0 00:08:41.911 Attached to 0000:00:12.0 00:08:41.911 Initialization complete. 00:08:41.911 Time used:116698.406 (us). 
00:08:41.911 ************************************ 00:08:41.911 END TEST nvme_startup 00:08:41.911 ************************************ 00:08:41.911 00:08:41.911 real 0m0.166s 00:08:41.911 user 0m0.057s 00:08:41.911 sys 0m0.072s 00:08:41.911 18:03:16 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.911 18:03:16 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:41.911 18:03:16 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:41.911 18:03:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.911 18:03:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.911 18:03:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.911 ************************************ 00:08:41.911 START TEST nvme_multi_secondary 00:08:41.911 ************************************ 00:08:41.911 18:03:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:41.911 18:03:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77066 00:08:41.911 18:03:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77067 00:08:41.911 18:03:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:41.911 18:03:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:41.911 18:03:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:45.283 Initializing NVMe Controllers 00:08:45.283 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.283 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.283 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.283 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.283 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:45.283 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:45.283 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:45.283 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:45.283 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:45.283 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:45.283 Initialization complete. Launching workers. 
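The pid0/pid1 traces above come from nvme_multi_secondary's multi-process pattern: several spdk_nvme_perf instances share DPDK shared-memory id 0 (-i 0) on disjoint core masks, so the later instances attach to the same controllers as secondary processes. A minimal sketch of that launch sequence, reconstructed from the commands and wait calls traced in this log (binary path taken from the log):

# Sketch of the primary/secondary launch pattern traced above.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # first instance becomes the primary (core 0)
pid0=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary process on core 1
pid1=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary on core 2, run in the foreground
wait "$pid0"
wait "$pid1"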
00:08:45.283 ======================================================== 00:08:45.283 Latency(us) 00:08:45.283 Device Information : IOPS MiB/s Average min max 00:08:45.283 PCIE (0000:00:10.0) NSID 1 from core 2: 3174.78 12.40 5038.00 850.48 12723.72 00:08:45.283 PCIE (0000:00:11.0) NSID 1 from core 2: 3174.78 12.40 5039.56 877.00 13059.11 00:08:45.283 PCIE (0000:00:13.0) NSID 1 from core 2: 3174.78 12.40 5039.82 824.91 13798.73 00:08:45.283 PCIE (0000:00:12.0) NSID 1 from core 2: 3174.78 12.40 5040.28 828.63 12961.55 00:08:45.283 PCIE (0000:00:12.0) NSID 2 from core 2: 3174.78 12.40 5040.48 815.63 12819.35 00:08:45.283 PCIE (0000:00:12.0) NSID 3 from core 2: 3174.78 12.40 5046.84 810.71 13064.68 00:08:45.283 ======================================================== 00:08:45.283 Total : 19048.69 74.41 5040.83 810.71 13798.73 00:08:45.283 00:08:45.283 18:03:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77066 00:08:45.283 Initializing NVMe Controllers 00:08:45.283 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.283 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.283 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.283 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.283 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:45.283 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:45.283 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:45.283 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:45.283 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:45.283 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:45.283 Initialization complete. Launching workers. 00:08:45.283 ======================================================== 00:08:45.283 Latency(us) 00:08:45.283 Device Information : IOPS MiB/s Average min max 00:08:45.283 PCIE (0000:00:10.0) NSID 1 from core 1: 7361.71 28.76 2171.90 782.82 6712.47 00:08:45.283 PCIE (0000:00:11.0) NSID 1 from core 1: 7361.71 28.76 2172.96 812.65 7370.77 00:08:45.283 PCIE (0000:00:13.0) NSID 1 from core 1: 7361.71 28.76 2172.95 809.13 7185.43 00:08:45.283 PCIE (0000:00:12.0) NSID 1 from core 1: 7361.71 28.76 2172.94 822.34 6969.87 00:08:45.283 PCIE (0000:00:12.0) NSID 2 from core 1: 7361.71 28.76 2172.90 814.89 6534.05 00:08:45.283 PCIE (0000:00:12.0) NSID 3 from core 1: 7361.71 28.76 2172.93 814.65 6373.08 00:08:45.283 ======================================================== 00:08:45.283 Total : 44170.28 172.54 2172.77 782.82 7370.77 00:08:45.283 00:08:47.180 Initializing NVMe Controllers 00:08:47.180 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.180 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.180 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.180 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.180 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:47.180 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:47.180 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:47.180 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:47.180 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:47.180 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:47.180 Initialization complete. Launching workers. 
00:08:47.180 ======================================================== 00:08:47.180 Latency(us) 00:08:47.180 Device Information : IOPS MiB/s Average min max 00:08:47.180 PCIE (0000:00:10.0) NSID 1 from core 0: 10968.90 42.85 1457.42 704.10 7120.47 00:08:47.180 PCIE (0000:00:11.0) NSID 1 from core 0: 10968.90 42.85 1458.26 725.23 6616.67 00:08:47.180 PCIE (0000:00:13.0) NSID 1 from core 0: 10968.90 42.85 1458.23 717.54 6527.18 00:08:47.180 PCIE (0000:00:12.0) NSID 1 from core 0: 10968.90 42.85 1458.20 726.56 6493.00 00:08:47.180 PCIE (0000:00:12.0) NSID 2 from core 0: 10968.90 42.85 1458.17 689.55 7380.04 00:08:47.180 PCIE (0000:00:12.0) NSID 3 from core 0: 10968.90 42.85 1458.14 564.13 6997.76 00:08:47.180 ======================================================== 00:08:47.180 Total : 65813.43 257.08 1458.07 564.13 7380.04 00:08:47.180 00:08:47.180 18:03:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77067 00:08:47.180 18:03:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77142 00:08:47.180 18:03:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:47.180 18:03:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77143 00:08:47.180 18:03:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:47.180 18:03:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:50.458 Initializing NVMe Controllers 00:08:50.458 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.458 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.458 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.458 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.458 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:50.458 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:50.458 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:50.458 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:50.458 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:50.458 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:50.458 Initialization complete. Launching workers. 
00:08:50.458 ======================================================== 00:08:50.458 Latency(us) 00:08:50.458 Device Information : IOPS MiB/s Average min max 00:08:50.458 PCIE (0000:00:10.0) NSID 1 from core 0: 7746.75 30.26 2064.06 717.14 5643.90 00:08:50.458 PCIE (0000:00:11.0) NSID 1 from core 0: 7746.75 30.26 2064.93 737.09 6154.19 00:08:50.458 PCIE (0000:00:13.0) NSID 1 from core 0: 7746.75 30.26 2064.97 744.50 5716.02 00:08:50.458 PCIE (0000:00:12.0) NSID 1 from core 0: 7746.75 30.26 2064.92 754.47 5701.02 00:08:50.458 PCIE (0000:00:12.0) NSID 2 from core 0: 7746.75 30.26 2064.78 757.15 5771.81 00:08:50.458 PCIE (0000:00:12.0) NSID 3 from core 0: 7746.75 30.26 2064.79 646.46 6065.31 00:08:50.458 ======================================================== 00:08:50.458 Total : 46480.49 181.56 2064.74 646.46 6154.19 00:08:50.458 00:08:50.458 Initializing NVMe Controllers 00:08:50.458 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.458 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.458 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.458 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.458 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:50.458 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:50.458 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:50.458 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:50.458 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:50.458 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:50.458 Initialization complete. Launching workers. 00:08:50.458 ======================================================== 00:08:50.458 Latency(us) 00:08:50.458 Device Information : IOPS MiB/s Average min max 00:08:50.458 PCIE (0000:00:10.0) NSID 1 from core 1: 7433.16 29.04 2151.05 710.31 6131.25 00:08:50.458 PCIE (0000:00:11.0) NSID 1 from core 1: 7433.16 29.04 2152.10 751.27 6192.19 00:08:50.458 PCIE (0000:00:13.0) NSID 1 from core 1: 7433.16 29.04 2152.06 761.77 5786.43 00:08:50.458 PCIE (0000:00:12.0) NSID 1 from core 1: 7433.16 29.04 2152.01 756.48 6088.81 00:08:50.458 PCIE (0000:00:12.0) NSID 2 from core 1: 7433.16 29.04 2151.95 759.59 6436.26 00:08:50.458 PCIE (0000:00:12.0) NSID 3 from core 1: 7433.16 29.04 2151.89 738.84 6024.48 00:08:50.458 ======================================================== 00:08:50.458 Total : 44598.96 174.21 2151.84 710.31 6436.26 00:08:50.458 00:08:53.004 Initializing NVMe Controllers 00:08:53.004 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.004 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.004 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.004 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.004 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:53.004 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:53.004 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:53.004 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:53.004 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:53.004 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:53.004 Initialization complete. Launching workers. 
00:08:53.004 ========================================================
00:08:53.004 Latency(us)
00:08:53.004 Device Information : IOPS MiB/s Average min max
00:08:53.004 PCIE (0000:00:10.0) NSID 1 from core 2: 4735.17 18.50 3377.44 726.56 13344.42
00:08:53.004 PCIE (0000:00:11.0) NSID 1 from core 2: 4735.17 18.50 3378.54 731.30 13137.67
00:08:53.004 PCIE (0000:00:13.0) NSID 1 from core 2: 4735.17 18.50 3378.30 744.31 12868.21
00:08:53.004 PCIE (0000:00:12.0) NSID 1 from core 2: 4735.17 18.50 3378.39 692.95 13223.67
00:08:53.004 PCIE (0000:00:12.0) NSID 2 from core 2: 4735.17 18.50 3377.96 579.52 13042.84
00:08:53.004 PCIE (0000:00:12.0) NSID 3 from core 2: 4735.17 18.50 3377.03 495.09 12821.75
00:08:53.004 ========================================================
00:08:53.004 Total : 28411.02 110.98 3377.94 495.09 13344.42
00:08:53.004
00:08:53.004 ************************************
00:08:53.004 END TEST nvme_multi_secondary
00:08:53.004 ************************************
00:08:53.004 18:03:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77142
00:08:53.004 18:03:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77143
00:08:53.004
00:08:53.004 real 0m10.671s
00:08:53.004 user 0m18.263s
00:08:53.004 sys 0m0.512s
00:08:53.004 18:03:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:53.004 18:03:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:08:53.004 18:03:26 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:08:53.004 18:03:26 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:08:53.004 18:03:26 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/76110 ]]
00:08:53.004 18:03:26 nvme -- common/autotest_common.sh@1094 -- # kill 76110
00:08:53.004 18:03:26 nvme -- common/autotest_common.sh@1095 -- # wait 76110
00:08:53.004 [2024-12-13 18:03:26.962962] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.963021] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.963039] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.963068] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.963892] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.963961] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.963987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.964015] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.964764] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.964874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.964903] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.964937] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.965557] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.965698] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.965716] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 [2024-12-13 18:03:26.965728] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:08:53.004 18:03:27 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:08:53.004 18:03:27 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:08:53.004 18:03:27 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:53.004 18:03:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:53.004 18:03:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.004 18:03:27 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:53.004 ************************************
00:08:53.004 START TEST bdev_nvme_reset_stuck_adm_cmd
00:08:53.004 ************************************
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:53.004 * Looking for test storage...
00:08:53.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:53.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:53.004 --rc genhtml_branch_coverage=1
00:08:53.004 --rc genhtml_function_coverage=1
00:08:53.004 --rc genhtml_legend=1
00:08:53.004 --rc geninfo_all_blocks=1
00:08:53.004 --rc geninfo_unexecuted_blocks=1
00:08:53.004
00:08:53.004 '
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:53.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:53.004 --rc genhtml_branch_coverage=1
00:08:53.004 --rc genhtml_function_coverage=1
00:08:53.004 --rc genhtml_legend=1
00:08:53.004 --rc geninfo_all_blocks=1
00:08:53.004 --rc geninfo_unexecuted_blocks=1
00:08:53.004
00:08:53.004 '
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:53.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:53.004 --rc genhtml_branch_coverage=1
00:08:53.004 --rc genhtml_function_coverage=1
00:08:53.004 --rc genhtml_legend=1
00:08:53.004 --rc geninfo_all_blocks=1
00:08:53.004 --rc geninfo_unexecuted_blocks=1
00:08:53.004
00:08:53.004 '
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:53.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:53.004 --rc genhtml_branch_coverage=1
00:08:53.004 --rc genhtml_function_coverage=1
00:08:53.004 --rc genhtml_legend=1
00:08:53.004 --rc geninfo_all_blocks=1
00:08:53.004 --rc geninfo_unexecuted_blocks=1
00:08:53.004
00:08:53.004 '
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:08:53.004 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:08:53.005
18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:53.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77298 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77298 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 77298 ']' 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
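The waitforlisten trace above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock. A hedged, simplified reimplementation of that wait loop (the real helper does more bookkeeping; the rpc.py path and socket path are taken from this log, and rpc_get_methods is used here only as a cheap liveness probe):

# Sketch of the launch-and-wait-for-RPC idiom traced above.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -m 0xF &
spdk_target_pid=$!
# Poll the RPC socket until the target is ready to serve requests.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$spdk_target_pid" 2> /dev/null || exit 1   # bail out if the target died early
    sleep 0.1
done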
00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:53.005 18:03:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:53.005 [2024-12-13 18:03:27.328603] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:08:53.005 [2024-12-13 18:03:27.328833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77298 ]
00:08:53.263 [2024-12-13 18:03:27.484363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:53.263 [2024-12-13 18:03:27.506720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:53.263 [2024-12-13 18:03:27.507049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:08:53.263 [2024-12-13 18:03:27.507222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.263 [2024-12-13 18:03:27.507321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:08:53.830 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:53.830 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
00:08:53.830 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:08:53.830 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.830 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:54.089 nvme0n1
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_h8YrW.txt
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:54.089 true
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734113008
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77321
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:08:54.089 18:03:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:55.989 [2024-12-13 18:03:30.251301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:08:55.989 [2024-12-13 18:03:30.251642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:08:55.989 [2024-12-13 18:03:30.251730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:08:55.989 [2024-12-13 18:03:30.251782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:55.989 [2024-12-13 18:03:30.253144] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:08:55.989 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77321
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77321
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77321
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_h8YrW.txt
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:08:55.989 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_h8YrW.txt
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77298
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 77298 ']'
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 77298
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77298
00:08:55.990 killing process with pid 77298
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77298'
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 77298
00:08:55.990 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 77298
00:08:56.248 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:08:56.248 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:08:56.248 ************************************
00:08:56.248 END TEST bdev_nvme_reset_stuck_adm_cmd
00:08:56.248 ************************************
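The base64_decode_bits traces above decode the raw 16-byte completion entry that bdev_nvme_send_cmd wrote into the .cpl field of its output file. The following is an illustrative reconstruction consistent with the traced inputs and outputs, not the literal test source: the status word sits in bytes 14-15 of the entry, and the two arguments read naturally as a bit offset and a mask (1/255 selects the status code, 9/3 the status code type).

# Hedged sketch: reproduces the traced results (status word = 2, SC = 0x1, SCT = 0x0).
base64_decode_bits() {
    local bin_array status
    # Decode the base64 CQE and dump it as one "0xNN" byte per array element.
    bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    # Completion status word: bytes 14-15, little-endian.
    status=$((bin_array[14] | bin_array[15] << 8))
    # Extract the requested bit-field as (status >> offset) & mask.
    printf 0x%x $(((status >> $2) & $3))
}

base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1 (status code)
base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0 (status code type)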
00:08:56.248
00:08:56.248 real 0m3.544s
00:08:56.248 user 0m12.637s
00:08:56.248 sys 0m0.454s
00:08:56.248 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.248 18:03:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:56.248 18:03:30 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:08:56.248 18:03:30 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:08:56.248 18:03:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:56.248 18:03:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.248 18:03:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:56.506 ************************************
00:08:56.506 START TEST nvme_fio
00:08:56.506 ************************************
00:08:56.506 18:03:30 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:08:56.506 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:08:56.506 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:08:56.506 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:08:56.506 18:03:30 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:56.506 18:03:30 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:08:56.506 18:03:30 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:56.506 18:03:30 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:56.506 18:03:30 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:56.506 18:03:30 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:08:56.506 18:03:30 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:56.506 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0')
00:08:56.506 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:08:56.506 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:56.506 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:56.506 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:56.765 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:56.765 18:03:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:56.765 18:03:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:56.765 18:03:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:56.765 18:03:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:08:57.023 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:57.023 fio-3.35
00:08:57.023 Starting 1 thread
00:09:03.646
00:09:03.646 test: (groupid=0, jobs=1): err= 0: pid=77444: Fri Dec 13 18:03:36 2024
00:09:03.646 read: IOPS=21.1k, BW=82.5MiB/s (86.6MB/s)(165MiB/2001msec)
00:09:03.646 slat (usec): min=3, max=193, avg= 5.15, stdev= 2.70
00:09:03.646 clat (usec): min=191, max=12064, avg=3027.69, stdev=1172.24
00:09:03.646 lat (usec): min=195, max=12103, avg=3032.84, stdev=1173.63
00:09:03.646 clat percentiles (usec):
00:09:03.646 | 1.00th=[ 1598], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409],
00:09:03.646 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2606],
00:09:03.646 | 70.00th=[ 2900], 80.00th=[ 3458], 90.00th=[ 4817], 95.00th=[ 5735],
00:09:03.646 | 99.00th=[ 7177], 99.50th=[ 7832], 99.90th=[ 8848], 99.95th=[10159],
00:09:03.646 | 99.99th=[11863]
00:09:03.646 bw ( KiB/s): min=77864, max=84296, per=94.98%, avg=80282.67, stdev=3499.98, samples=3
00:09:03.646 iops : min=19466, max=21074, avg=20070.67, stdev=875.00, samples=3
00:09:03.646 write: IOPS=21.0k, BW=82.0MiB/s (86.0MB/s)(164MiB/2001msec); 0 zone resets
00:09:03.646 slat (nsec): min=3428, max=70449, avg=5351.36, stdev=2528.74
00:09:03.646 clat (usec): min=211, max=11966, avg=3028.50, stdev=1172.56
00:09:03.646 lat (usec): min=215, max=11982, avg=3033.86, stdev=1173.91
00:09:03.646 clat percentiles (usec):
00:09:03.646 | 1.00th=[ 1565], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2409],
00:09:03.646 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2638],
00:09:03.646 | 70.00th=[ 2900], 80.00th=[ 3458], 90.00th=[ 4883], 95.00th=[ 5735],
00:09:03.646 | 99.00th=[ 7242], 99.50th=[ 7832], 99.90th=[ 9110], 99.95th=[10290],
00:09:03.646 | 99.99th=[11731]
00:09:03.646 bw ( KiB/s): min=78160, max=84248, per=95.61%, avg=80317.33, stdev=3409.47, samples=3
00:09:03.646 iops : min=19540, max=21062, avg=20079.33, stdev=852.37, samples=3
00:09:03.646 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.06%
00:09:03.646 lat (msec) : 2=3.17%, 4=80.92%, 10=15.76%, 20=0.06%
00:09:03.646 cpu : usr=99.00%, sys=0.15%, ctx=8, majf=0, minf=625
00:09:03.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:09:03.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:03.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:03.646 issued rwts: total=42282,42025,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:03.646 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:03.646
00:09:03.646 Run status group 0 (all jobs):
00:09:03.646 READ: bw=82.5MiB/s (86.6MB/s), 82.5MiB/s-82.5MiB/s (86.6MB/s-86.6MB/s), io=165MiB (173MB), run=2001-2001msec
00:09:03.646 WRITE: bw=82.0MiB/s (86.0MB/s), 82.0MiB/s-82.0MiB/s (86.0MB/s-86.0MB/s), io=164MiB (172MB), run=2001-2001msec
00:09:03.646
00:09:03.646 -----------------------------------------------------
00:09:03.646 Suppressions used:
00:09:03.646 count bytes template
00:09:03.646 1 32 /usr/src/fio/parse.c
00:09:03.646 1 8 libtcmalloc_minimal.so
00:09:03.646 -----------------------------------------------------
00:09:03.646
00:09:03.646 18:03:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:09:03.646 18:03:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:09:03.646 18:03:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:09:03.646 18:03:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:09:03.646 18:03:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:09:03.646 18:03:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:09:03.646 18:03:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:09:03.646 18:03:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:09:03.646 18:03:37 nvme.nvme_fio --
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:03.646 18:03:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:03.646 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.646 fio-3.35 00:09:03.646 Starting 1 thread 00:09:10.214 00:09:10.214 test: (groupid=0, jobs=1): err= 0: pid=77499: Fri Dec 13 18:03:43 2024 00:09:10.214 read: IOPS=20.3k, BW=79.3MiB/s (83.1MB/s)(159MiB/2001msec) 00:09:10.214 slat (nsec): min=3248, max=79013, avg=5137.65, stdev=2538.51 00:09:10.214 clat (usec): min=710, max=12771, avg=3144.62, stdev=1062.04 00:09:10.214 lat (usec): min=714, max=12808, avg=3149.76, stdev=1063.10 00:09:10.214 clat percentiles (usec): 00:09:10.214 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:10.214 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2900], 00:09:10.214 | 70.00th=[ 3130], 80.00th=[ 3687], 90.00th=[ 4752], 95.00th=[ 5473], 00:09:10.214 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[ 8356], 99.95th=[10552], 00:09:10.214 | 99.99th=[12649] 00:09:10.214 bw ( KiB/s): min=78392, max=80792, per=97.67%, avg=79309.33, stdev=1296.03, samples=3 00:09:10.214 iops : min=19598, max=20198, avg=19827.33, stdev=324.01, samples=3 00:09:10.214 write: IOPS=20.3k, BW=79.1MiB/s (83.0MB/s)(158MiB/2001msec); 0 zone resets 00:09:10.214 slat (nsec): min=3336, max=92558, avg=5264.68, stdev=2405.43 00:09:10.214 clat (usec): min=788, max=12701, avg=3147.24, stdev=1050.15 00:09:10.214 lat (usec): min=792, max=12717, avg=3152.50, stdev=1051.13 00:09:10.214 clat percentiles (usec): 00:09:10.214 | 1.00th=[ 1844], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:10.214 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2933], 00:09:10.214 | 70.00th=[ 3130], 80.00th=[ 3687], 90.00th=[ 4752], 95.00th=[ 5473], 00:09:10.214 | 99.00th=[ 6718], 99.50th=[ 7177], 99.90th=[ 8717], 99.95th=[10683], 00:09:10.214 | 99.99th=[12518] 00:09:10.214 bw ( KiB/s): min=78616, max=80600, per=97.90%, avg=79333.33, stdev=1100.18, samples=3 00:09:10.214 iops : min=19654, max=20150, avg=19833.33, stdev=275.04, samples=3 00:09:10.214 lat (usec) : 750=0.01%, 1000=0.04% 00:09:10.214 lat (msec) : 2=1.46%, 4=81.62%, 10=16.80%, 20=0.08% 00:09:10.214 cpu : usr=99.05%, sys=0.10%, ctx=3, majf=0, minf=625 00:09:10.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:10.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.214 issued rwts: total=40620,40536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.214 00:09:10.214 Run status group 0 (all jobs): 00:09:10.214 READ: bw=79.3MiB/s (83.1MB/s), 79.3MiB/s-79.3MiB/s (83.1MB/s-83.1MB/s), io=159MiB (166MB), run=2001-2001msec 00:09:10.214 WRITE: bw=79.1MiB/s (83.0MB/s), 79.1MiB/s-79.1MiB/s (83.0MB/s-83.0MB/s), io=158MiB (166MB), run=2001-2001msec 00:09:10.214 ----------------------------------------------------- 00:09:10.214 Suppressions used: 00:09:10.214 count bytes template 00:09:10.214 1 32 /usr/src/fio/parse.c 00:09:10.214 1 8 libtcmalloc_minimal.so 00:09:10.214 ----------------------------------------------------- 00:09:10.214 00:09:10.214 
18:03:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:10.214 18:03:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:10.214 18:03:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:10.214 18:03:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:10.214 18:03:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:10.214 18:03:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:10.214 18:03:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:10.214 18:03:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:10.214 18:03:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:10.214 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:10.214 fio-3.35 00:09:10.214 Starting 1 thread 00:09:16.793 00:09:16.793 test: (groupid=0, jobs=1): err= 0: pid=77554: Fri Dec 13 18:03:50 2024 00:09:16.793 read: IOPS=22.0k, BW=85.9MiB/s (90.1MB/s)(172MiB/2001msec) 00:09:16.793 slat (nsec): min=3524, max=63089, avg=5041.26, stdev=2176.36 00:09:16.793 clat (usec): min=175, max=9351, avg=2903.69, stdev=939.59 00:09:16.793 lat (usec): min=179, max=9370, avg=2908.73, stdev=940.70 00:09:16.793 clat percentiles (usec): 00:09:16.793 | 1.00th=[ 1958], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2409], 00:09:16.793 | 30.00th=[ 2474], 
40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2638], 00:09:16.793 | 70.00th=[ 2769], 80.00th=[ 3064], 90.00th=[ 4080], 95.00th=[ 5211], 00:09:16.793 | 99.00th=[ 6456], 99.50th=[ 7177], 99.90th=[ 8356], 99.95th=[ 8455], 00:09:16.793 | 99.99th=[ 8979] 00:09:16.793 bw ( KiB/s): min=84680, max=95248, per=100.00%, avg=90338.67, stdev=5323.70, samples=3 00:09:16.793 iops : min=21170, max=23812, avg=22584.67, stdev=1330.92, samples=3 00:09:16.793 write: IOPS=21.9k, BW=85.4MiB/s (89.5MB/s)(171MiB/2001msec); 0 zone resets 00:09:16.793 slat (nsec): min=3781, max=63312, avg=5229.74, stdev=2203.42 00:09:16.793 clat (usec): min=189, max=9216, avg=2916.64, stdev=942.86 00:09:16.793 lat (usec): min=193, max=9232, avg=2921.87, stdev=943.94 00:09:16.793 clat percentiles (usec): 00:09:16.793 | 1.00th=[ 1926], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:16.793 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:16.793 | 70.00th=[ 2802], 80.00th=[ 3097], 90.00th=[ 4113], 95.00th=[ 5211], 00:09:16.793 | 99.00th=[ 6456], 99.50th=[ 7242], 99.90th=[ 8356], 99.95th=[ 8586], 00:09:16.793 | 99.99th=[ 9110] 00:09:16.793 bw ( KiB/s): min=86728, max=94400, per=100.00%, avg=90576.00, stdev=3836.06, samples=3 00:09:16.793 iops : min=21682, max=23600, avg=22644.00, stdev=959.01, samples=3 00:09:16.793 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:16.793 lat (msec) : 2=1.15%, 4=88.20%, 10=10.59% 00:09:16.793 cpu : usr=99.10%, sys=0.00%, ctx=5, majf=0, minf=626 00:09:16.793 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:16.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.793 issued rwts: total=44013,43734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.793 00:09:16.793 Run status group 0 (all jobs): 00:09:16.793 READ: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=172MiB (180MB), run=2001-2001msec 00:09:16.793 WRITE: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:09:16.793 ----------------------------------------------------- 00:09:16.793 Suppressions used: 00:09:16.793 count bytes template 00:09:16.793 1 32 /usr/src/fio/parse.c 00:09:16.793 1 8 libtcmalloc_minimal.so 00:09:16.793 ----------------------------------------------------- 00:09:16.793 00:09:16.793 18:03:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:16.793 18:03:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.793 18:03:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.793 18:03:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.793 18:03:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.793 18:03:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.793 18:03:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.793 18:03:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.793 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.793 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:16.793 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:16.793 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:16.793 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.793 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:16.793 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:16.794 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.794 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:16.794 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.794 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:16.794 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.794 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.794 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:16.794 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.794 18:03:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:17.054 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:17.054 fio-3.35 00:09:17.054 Starting 1 thread 00:09:23.619 00:09:23.619 test: (groupid=0, jobs=1): err= 0: pid=77609: Fri Dec 13 18:03:56 2024 00:09:23.619 read: IOPS=24.8k, BW=97.0MiB/s (102MB/s)(194MiB/2001msec) 00:09:23.619 slat (usec): min=4, max=465, avg= 4.82, stdev= 2.89 00:09:23.619 clat (usec): min=209, max=9428, avg=2571.33, stdev=714.44 00:09:23.619 lat (usec): min=214, max=9462, avg=2576.15, stdev=715.61 00:09:23.619 clat percentiles (usec): 00:09:23.619 | 1.00th=[ 1876], 5.00th=[ 2073], 10.00th=[ 2147], 20.00th=[ 2278], 00:09:23.619 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:23.619 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2835], 95.00th=[ 3654], 00:09:23.619 | 99.00th=[ 6390], 99.50th=[ 6587], 99.90th=[ 7832], 99.95th=[ 7898], 00:09:23.619 | 99.99th=[ 9110] 00:09:23.619 bw ( KiB/s): min=91624, max=102808, per=98.68%, avg=98029.33, stdev=5766.71, samples=3 00:09:23.619 iops : min=22906, max=25702, avg=24507.33, stdev=1441.68, samples=3 00:09:23.619 write: IOPS=24.7k, BW=96.4MiB/s (101MB/s)(193MiB/2001msec); 0 zone resets 00:09:23.619 slat (usec): min=4, max=220, avg= 5.12, stdev= 2.17 00:09:23.619 clat (usec): min=224, max=9227, avg=2581.15, stdev=721.07 00:09:23.619 lat (usec): min=229, max=9239, avg=2586.27, stdev=722.27 00:09:23.619 clat percentiles (usec): 00:09:23.619 | 1.00th=[ 1893], 5.00th=[ 2089], 10.00th=[ 2180], 20.00th=[ 2278], 00:09:23.619 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:23.619 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2868], 95.00th=[ 3720], 00:09:23.619 | 99.00th=[ 
6390], 99.50th=[ 6587], 99.90th=[ 7635], 99.95th=[ 7832], 00:09:23.619 | 99.99th=[ 8979] 00:09:23.619 bw ( KiB/s): min=91912, max=102480, per=99.27%, avg=98008.00, stdev=5467.97, samples=3 00:09:23.619 iops : min=22978, max=25620, avg=24502.00, stdev=1366.99, samples=3 00:09:23.619 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:23.619 lat (msec) : 2=2.44%, 4=93.13%, 10=4.40% 00:09:23.619 cpu : usr=98.75%, sys=0.30%, ctx=28, majf=0, minf=625 00:09:23.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:23.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:23.619 issued rwts: total=49693,49390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:23.619 00:09:23.619 Run status group 0 (all jobs): 00:09:23.619 READ: bw=97.0MiB/s (102MB/s), 97.0MiB/s-97.0MiB/s (102MB/s-102MB/s), io=194MiB (204MB), run=2001-2001msec 00:09:23.619 WRITE: bw=96.4MiB/s (101MB/s), 96.4MiB/s-96.4MiB/s (101MB/s-101MB/s), io=193MiB (202MB), run=2001-2001msec 00:09:23.619 ----------------------------------------------------- 00:09:23.619 Suppressions used: 00:09:23.619 count bytes template 00:09:23.619 1 32 /usr/src/fio/parse.c 00:09:23.619 1 8 libtcmalloc_minimal.so 00:09:23.619 ----------------------------------------------------- 00:09:23.619 00:09:23.619 18:03:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:23.619 18:03:57 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:23.619 00:09:23.619 real 0m26.566s 00:09:23.619 user 0m17.251s 00:09:23.619 sys 0m15.958s 00:09:23.619 18:03:57 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.619 18:03:57 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:23.619 ************************************ 00:09:23.619 END TEST nvme_fio 00:09:23.619 ************************************ 00:09:23.619 00:09:23.619 real 1m33.513s 00:09:23.619 user 3m31.779s 00:09:23.619 sys 0m25.595s 00:09:23.619 18:03:57 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.619 ************************************ 00:09:23.619 END TEST nvme 00:09:23.619 ************************************ 00:09:23.619 18:03:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.619 18:03:57 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:23.619 18:03:57 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:23.619 18:03:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.619 18:03:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.619 18:03:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.619 ************************************ 00:09:23.619 START TEST nvme_scc 00:09:23.619 ************************************ 00:09:23.619 18:03:57 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:23.619 * Looking for test storage... 
00:09:23.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:23.619 18:03:57 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:23.619 18:03:57 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:23.619 18:03:57 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:23.619 18:03:57 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:23.619 18:03:57 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:23.620 18:03:57 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.620 18:03:57 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:23.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.620 --rc genhtml_branch_coverage=1 00:09:23.620 --rc genhtml_function_coverage=1 00:09:23.620 --rc genhtml_legend=1 00:09:23.620 --rc geninfo_all_blocks=1 00:09:23.620 --rc geninfo_unexecuted_blocks=1 00:09:23.620 00:09:23.620 ' 00:09:23.620 18:03:57 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:23.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.620 --rc genhtml_branch_coverage=1 00:09:23.620 --rc genhtml_function_coverage=1 00:09:23.620 --rc genhtml_legend=1 00:09:23.620 --rc geninfo_all_blocks=1 00:09:23.620 --rc geninfo_unexecuted_blocks=1 00:09:23.620 00:09:23.620 ' 00:09:23.620 18:03:57 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:23.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.620 --rc genhtml_branch_coverage=1 00:09:23.620 --rc genhtml_function_coverage=1 00:09:23.620 --rc genhtml_legend=1 00:09:23.620 --rc geninfo_all_blocks=1 00:09:23.620 --rc geninfo_unexecuted_blocks=1 00:09:23.620 00:09:23.620 ' 00:09:23.620 18:03:57 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:23.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.620 --rc genhtml_branch_coverage=1 00:09:23.620 --rc genhtml_function_coverage=1 00:09:23.620 --rc genhtml_legend=1 00:09:23.620 --rc geninfo_all_blocks=1 00:09:23.620 --rc geninfo_unexecuted_blocks=1 00:09:23.620 00:09:23.620 ' 00:09:23.620 18:03:57 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.620 18:03:57 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.620 18:03:57 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.620 18:03:57 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.620 18:03:57 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.620 18:03:57 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:23.620 18:03:57 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
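The lt/cmp_versions xtrace a few entries up (scripts/common.sh) is the gate that decides which spelling of the lcov rc flags gets exported above: version strings are split on '.', '-' and ':' and compared field by field. Roughly reconstructed from the trace; the decimal() fallback for non-numeric fields and the flag bookkeeping in the case statement are simplified here:

decimal() {
    local d=$1
    if [[ $d =~ ^[0-9]+$ ]]; then
        echo "$d"
    else
        echo 0   # assumption: non-numeric fields compare as 0
    fi
}

cmp_versions() {
    local ver1 ver1_l
    local ver2 ver2_l

    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"

    local op=$2 lt=0 gt=0 eq=0 v
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

    case "$op" in
        "<") lt=1 ;;
        ">") gt=1 ;;
        "==") eq=1 ;;
    esac

    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        ((ver1[v] > ver2[v])) && return $((!gt))
        ((ver1[v] < ver2[v])) && return $((!lt))
    done
    return $((!eq))
}

lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, exactly as traced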
00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:23.620 18:03:57 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:23.620 18:03:57 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:23.620 18:03:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:23.620 18:03:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:23.620 18:03:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:23.620 18:03:57 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:23.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:23.620 Waiting for block devices as requested 00:09:23.620 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.620 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.878 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.878 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.170 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:29.170 18:04:03 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:29.170 18:04:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:29.170 18:04:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:29.170 18:04:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:29.170 18:04:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
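Everything from here to the end of the controller scan is one pattern expanded hundreds of times: scan_nvme_ctrls walks /sys/class/nvme/nvme*, and for each controller nvme_get splits every "field : value" line of nvme id-ctrl output and evals it into a global associative array named after the device. A minimal sketch of that loop, reconstructed from the trace (the whitespace trimming and exact argument handling are assumptions):

nvme_get() {
    local ref=$1 reg val
    shift   # remaining args: nvme-cli subcommand plus device, e.g. "id-ctrl /dev/nvme0"

    local -gA "$ref=()"   # declares the global array, e.g. nvme0

    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}   # keys arrive right-padded; values keep their padding
        val=${val# }
        eval "${ref}[$reg]=\$val"  # -> nvme0[vid]=0x1b36, nvme0[sn]='12341   ', ...
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

nvme_get nvme0 id-ctrl /dev/nvme0   # one call per controller found under /sys/class/nvme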
00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.170 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
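The dump continues below through the I/O command-set fields (oncs, fuses, vwc, sgls, ...). Once populated, these arrays are what the test logic queries; for nvme_scc the relevant field is oncs, whose bit 8 advertises Copy command support per the NVMe spec, and the value 0x15d reported further down does have that bit set. An illustrative check against the array built above (the helper name is ours, not SPDK's):

supports_copy() {
    local oncs=$1          # e.g. "${nvme0[oncs]}" once the scan completes
    (((oncs >> 8) & 1))    # ONCS bit 8 = Copy command supported
}

supports_copy 0x15d && echo "simple copy (SCC) supported"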
00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.171 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:29.172 18:04:03 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:29.172 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:29.173 18:04:03 nvme_scc -- 
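[editor's note] The trace above (functions.sh@16-23) shows the shape of nvme_get: run nvme-cli's id-ctrl or id-ns, split each output line on the first colon, and eval the pair into a global associative array. A hedged re-sketch of that pattern follows; the whitespace trimming is an assumption for illustration, not the verbatim SPDK source:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                       # e.g. nvme0, ng0n1, nvme1 (as in the trace)
    while IFS=: read -r reg val; do
        reg=${reg%"${reg##*[![:space:]]}"}    # right-trim the padded field name
        [[ -n $val ]] || continue             # skip banner and blank lines
        eval "${ref}[\$reg]=\${val# }"        # e.g. nvme0[sqes]=0x66
    done < <("$@")
}
nvme_get nvme0 nvme id-ctrl /dev/nvme0        # illustrative invocation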
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # 
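[editor's note] At functions.sh@53-57 above, the script binds a nameref to the controller's namespace map and then iterates an extglob pattern that matches both the generic char node (ng0n1) and the block node (nvme0n1) under the controller's sysfs directory. A standalone sketch of that enumeration, assuming the usual /sys/class/nvme layout:

shopt -s extglob nullglob
for ctrl in /sys/class/nvme/nvme*; do
    # "ng${ctrl##*nvme}" -> ng0..., "${ctrl##*/}n" -> nvme0n...; the @() extglob
    # alternation picks up both kinds of namespace nodes in one pass
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ctrl##*/}: namespace node ${ns##*/}"
    done
done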
read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:29.173 
18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.173 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:29.174 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:29.174 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:29.175 18:04:03 nvme_scc 
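[editor's note] The lbaf0-lbaf7 rows captured for ng0n1 above, together with flbas=0x4, identify the in-use LBA format: the low nibble of flbas indexes the lbaf table, and lbads is the log2 block size, so lbaf4's "lbads:12" means 4096-byte blocks (with nsze=0x140000 that works out to 5 GiB). A hedged helper over the arrays this trace builds; the function itself is illustrative:

lba_size() {
    local -n _ns=$1                        # nameref, same mechanism as functions.sh@53
    local idx=$(( ${_ns[flbas]} & 0xf ))   # low nibble selects the lbaf entry
    local fmt=${_ns[lbaf$idx]}             # e.g. "ms:0 lbads:12 rp:0 (in use)"
    local lbads=${fmt#*lbads:}
    echo $(( 1 << ${lbads%% *} ))          # 1 << 12 = 4096
}
lba_size ng0n1   # -> 4096 for the values captured above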
-- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:29.175 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.175 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:29.176 18:04:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:29.176 18:04:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:29.176 18:04:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:29.176 18:04:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:29.176 18:04:03 
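[editor's note] Lines functions.sh@58-63 above wire the per-controller bookkeeping together: the namespace-map entry, then the global ctrls, nvmes, bdfs and ordered_ctrls tables, before the @47-52 loop moves on to nvme1 (whose PCI address 0000:00:10.0 passes pci_can_use because, per the scripts/common.sh trace, both the allow and block lists are empty). A reconstruction of that registration step, assuming the four tables are ordinary bash arrays as the trace suggests:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

register_ctrl() {
    local ctrl_dev=$1 pci=$2
    ctrls["$ctrl_dev"]=$ctrl_dev                  # controller registry
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # name of its namespace map (nameref target)
    bdfs["$ctrl_dev"]=$pci                        # PCI address, e.g. 0000:00:11.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # index 0 -> nvme0, 1 -> nvme1
}
register_ctrl nvme0 0000:00:11.0
register_ctrl nvme1 0000:00:10.0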
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:29.176 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 
18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:29.177 
18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
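[editor's note] The wctemp=343 and cctemp=373 values just above are kelvins, which is how NVMe reports temperature thresholds; subtracting 273 (integer approximation) gives a 70 C warning threshold and a 100 C critical threshold. A one-line helper, purely illustrative:

k_to_c() { echo $(( $1 - 273 )); }   # NVMe temperature fields are in kelvins
k_to_c 343   # wctemp -> 70  (warning threshold, C)
k_to_c 373   # cctemp -> 100 (critical threshold, C)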
'nvme1[mtfa]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.177 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:29.178 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.179 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:29.179 18:04:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:29.179 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:29.180 18:04:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.180 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 
18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:29.181 
18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.181 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:29.182 18:04:03 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:29.182 18:04:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:29.182 18:04:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:29.182 18:04:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:29.182 18:04:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:29.182 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
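Stepping back from the per-field parsing, the functions.sh@47-63 entries earlier in this pass show the enumeration that drives it: every controller under /sys/class/nvme is resolved to its PCI address, checked against the (empty, in this run) allow/block filter via pci_can_use, parsed with nvme_get, and then registered in the ctrls/nvmes/bdfs/ordered_ctrls maps. An illustrative reconstruction, where the PCI-address lookup is an assumed sysfs layout rather than a copy of scripts/common.sh:

    # Sketch of the controller-enumeration bookkeeping traced at
    # functions.sh@47-63; the pci derivation below is an assumption.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue         # allow/block-list check (@50)
        ctrl_dev=${ctrl##*/}                   # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        ctrls["$ctrl_dev"]=$ctrl_dev           # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns      # @61: name of the per-ctrl ns map
        bdfs["$ctrl_dev"]=$pci                 # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # @63: indexed by ctrl number
    done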
00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:29.183 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:29.183 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
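One unit detail in the values just stored: per the NVMe spec, wctemp and cctemp are reported in kelvins, so the 343 and 373 above are the 70 °C warning and 100 °C critical composite-temperature thresholds:

    # Converting the captured thresholds from kelvins:
    echo $(( ${nvme2[wctemp]} - 273 ))   # 343 K -> 70 C  (warning)
    echo $(( ${nvme2[cctemp]} - 273 ))   # 373 K -> 100 C (critical)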
00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:29.184 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:29.184 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:29.185 
18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:29.185 
18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.185 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
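The loop header at functions.sh@54 that produced this ng2n1 pass depends on extglob: for each controller it globs both the generic character nodes (ng2n1, ng2n2) and the block nodes (nvme2n1, ...) in the controller's sysfs directory, and @58 later keys the namespace map by the trailing namespace number. A standalone illustration of the pattern:

    # Standalone demo of the @54 extglob namespace pattern.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                           # ng2n1, nvme2n1, ...
        echo "$ns_dev -> namespace ${ns_dev##*n}"  # @58 keys on the ns number
    done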
00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.186 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:29.187 18:04:03 nvme_scc -- 
00:09:29.187 18:04:03 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n2 id-ns registers: flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
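The trace above is one full pass through the nvme_get helper: functions.sh@16 runs nvme-cli's id-ns against the device node, @21 sets IFS=: and reads each "field : value" line of the text output, and @22-23 eval the pair into a global associative array named after the device. A minimal self-contained sketch of that pattern, assuming nvme-cli's default id-ns text layout (illustrative only; the helper name and call shape follow this log, not the verbatim SPDK source):

#!/usr/bin/env bash
# Sketch: fold `nvme id-ns` text output into a global associative array.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                # -g: the array outlives the function
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # field name with padding stripped, e.g. lbaf0
        val=${val#"${val%%[! ]*}"}     # trim the spaces that follow the colon
        [[ -n $reg && -n $val ]] || continue   # skip lines without a field:value pair
        eval "${ref}[\$reg]=\$val"     # e.g. ng2n3[flbas]='0x4'
    done < <("$@")                     # caller supplies the full command line
}

# e.g.: nvme_get ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
#       echo "${ng2n3[flbas]}"         # -> 0x4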
00:09:29.188 18:04:03 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n3 id-ns registers: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:29.190 18:04:03 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n3 LBA formats: identical to ng2n2 (lbaf0-3 ms:0/8/16/64 at lbads:9, lbaf4-7 ms:0/8/16/64 at lbads:12; lbaf4 in use)
00:09:29.190 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:29.190 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:29.190 18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:29.190 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:29.190 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:29.190 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:29.190 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
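Between dumps, functions.sh@54-58 walk the controller's namespace nodes with an extglob pattern that matches both the generic character devices (ng2n*) and the block devices (nvme2n*), indexing _ctrl_ns by namespace id, so nvme2nY, enumerated after ng2nY, lands in the same slot. A hedged reconstruction of that loop under those assumptions (extglob enabled; variable names taken from the log):

shopt -s extglob nullglob
declare -A _ctrl_ns=()

ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng2* or nvme2n*
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                   # e.g. ng2n3 or nvme2n3
    nvme_get "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns##*n}]=$ns_dev        # keyed by nsid ("3" for ...n3); the block
done                                   # device overwrites the generic one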
00:09:29.190 18:04:03 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n1 id-ns registers: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n1 LBA formats: identical to ng2n2/ng2n3 (lbaf4='ms:0 lbads:12 rp:0' in use)
00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
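Decoding the in-use format shown above: lbaf4 = 'ms:0 lbads:12 rp:0' means no per-block metadata (ms:0) and a logical block of 2^12 = 4096 bytes, while the lbads:9 formats would give 512-byte blocks. With nsze = 0x100000 = 1,048,576 blocks, each of these namespaces therefore advertises 1,048,576 x 4,096 B = 4,294,967,296 B = 4 GiB (nvmcap is simply reported as 0 by this device).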
]] 00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.191 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:29.192 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.192 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:29.193 
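[The trace above and below shows the nvme_get helper (nvme/functions.sh@16-23) at work: it runs `nvme id-ns` (or `id-ctrl`) against a device, reads each output line as a `reg : val` pair with IFS=:, and evals the pair into a global associative array named after the device — nvme2n2 just finished, nvme2n3 starts here. A minimal sketch of that pattern; the helper name nvme_get_sketch is hypothetical, and it assumes one "reg : value" pair per output line, which is what the real nvme_get relies on:

    # Sketch of the nvme_get parsing loop, under the assumptions above.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                      # global associative array, e.g. nvme2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # same guard as the [[ -n ... ]] checks in the trace
            reg=${reg//[[:space:]]/}             # "lbaf  4" collapses to "lbaf4"
            val=${val#"${val%%[![:space:]]*}"}   # trim the padding left of the value
            eval "${ref}[$reg]=\"$val\""         # e.g. nvme2n3[nsze]="0x100000"
        done < <(nvme id-ns "$dev")
    }

After `nvme_get_sketch nvme2n3 /dev/nvme2n3`, fields are queryable as ${nvme2n3[nsze]}, ${nvme2n3[lbaf4]}, and so on — exactly the lookups the feature checks later in this log perform.]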
18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:29.193 18:04:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:29.193 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.194 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:29.194 18:04:03 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:29.194 18:04:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:29.194 18:04:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:29.194 18:04:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:29.195 18:04:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:29.195 18:04:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:29.195 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:29.195 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.195 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 
18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:29.196 18:04:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 
18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:29.196 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:29.197 
18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:29.197 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:29.457 18:04:03 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:29.457 18:04:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.457 18:04:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:29.458 18:04:03 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:29.458 18:04:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:29.458 18:04:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:29.458 18:04:03 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:29.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.283 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.283 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.283 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.283 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.283 18:04:04 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:30.283 18:04:04 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:30.283 18:04:04 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.283 18:04:04 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:30.283 ************************************ 00:09:30.283 START TEST nvme_simple_copy 00:09:30.283 ************************************ 00:09:30.283 18:04:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:30.541 Initializing NVMe Controllers 00:09:30.541 Attaching to 0000:00:10.0 00:09:30.541 Controller supports SCC. Attached to 0000:00:10.0 00:09:30.541 Namespace ID: 1 size: 6GB 00:09:30.541 Initialization complete. 
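The simple_copy result lines that follow are a copy-then-compare check: 64 LBAs of random data written at LBA 0, copied to destination LBA 256, and read back with all 64 blocks matching. Purely as an illustration of that check (the controllers in this run are bound to uio_pci_generic, so the real test drives the device through SPDK; /dev/nvme0n1 here is a hypothetical kernel block device, and 4096 matches the "Namespace Block Size:4096" line below):

    # Copy-then-compare: the idea behind "LBAs matching Written Data: 64".
    bs=4096
    dd if=/dev/nvme0n1 bs=$bs skip=0   count=64 status=none > /tmp/src.bin
    dd if=/dev/nvme0n1 bs=$bs skip=256 count=64 status=none > /tmp/dst.bin
    cmp --silent /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"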
00:09:30.541 00:09:30.541 Controller QEMU NVMe Ctrl (12340 ) 00:09:30.541 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:30.541 Namespace Block Size:4096 00:09:30.541 Writing LBAs 0 to 63 with Random Data 00:09:30.541 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:30.541 LBAs matching Written Data: 64 00:09:30.541 00:09:30.541 real 0m0.224s 00:09:30.541 user 0m0.078s 00:09:30.541 sys 0m0.046s 00:09:30.541 18:04:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.541 18:04:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:30.541 ************************************ 00:09:30.541 END TEST nvme_simple_copy 00:09:30.541 ************************************ 00:09:30.541 00:09:30.541 real 0m7.511s 00:09:30.541 user 0m1.010s 00:09:30.541 sys 0m1.297s 00:09:30.541 18:04:04 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.541 18:04:04 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:30.541 ************************************ 00:09:30.541 END TEST nvme_scc 00:09:30.541 ************************************ 00:09:30.541 18:04:04 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:30.541 18:04:04 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:30.541 18:04:04 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:30.541 18:04:04 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:30.541 18:04:04 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:30.541 18:04:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.541 18:04:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.541 18:04:04 -- common/autotest_common.sh@10 -- # set +x 00:09:30.541 ************************************ 00:09:30.541 START TEST nvme_fdp 00:09:30.541 ************************************ 00:09:30.541 18:04:04 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:30.541 * Looking for test storage... 00:09:30.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.541 18:04:04 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.541 18:04:04 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.541 18:04:04 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.801 18:04:04 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:30.801 18:04:04 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.801 18:04:04 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.801 --rc genhtml_branch_coverage=1 00:09:30.801 --rc genhtml_function_coverage=1 00:09:30.801 --rc genhtml_legend=1 00:09:30.801 --rc geninfo_all_blocks=1 00:09:30.801 --rc geninfo_unexecuted_blocks=1 00:09:30.801 00:09:30.801 ' 00:09:30.801 18:04:04 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.801 --rc genhtml_branch_coverage=1 00:09:30.801 --rc genhtml_function_coverage=1 00:09:30.801 --rc genhtml_legend=1 00:09:30.801 --rc geninfo_all_blocks=1 00:09:30.801 --rc geninfo_unexecuted_blocks=1 00:09:30.801 00:09:30.801 ' 00:09:30.801 18:04:04 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.801 --rc genhtml_branch_coverage=1 00:09:30.801 --rc genhtml_function_coverage=1 00:09:30.801 --rc genhtml_legend=1 00:09:30.801 --rc geninfo_all_blocks=1 00:09:30.801 --rc geninfo_unexecuted_blocks=1 00:09:30.801 00:09:30.801 ' 00:09:30.801 18:04:04 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.801 --rc genhtml_branch_coverage=1 00:09:30.801 --rc genhtml_function_coverage=1 00:09:30.801 --rc genhtml_legend=1 00:09:30.801 --rc geninfo_all_blocks=1 00:09:30.801 --rc geninfo_unexecuted_blocks=1 00:09:30.801 00:09:30.801 ' 00:09:30.801 18:04:04 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.801 18:04:04 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.801 18:04:04 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.801 18:04:04 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.801 18:04:04 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.801 18:04:04 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:30.801 18:04:04 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:30.801 18:04:04 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:30.801 18:04:04 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.801 18:04:04 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:31.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.059 Waiting for block devices as requested 00:09:31.059 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.320 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.320 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.320 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.633 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:36.633 18:04:10 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:36.633 18:04:10 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:36.633 18:04:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.633 18:04:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:36.633 18:04:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.633 18:04:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:36.633 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.633 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.634 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:36.634 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:36.635 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 
18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:36.635 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.635 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:36.636 18:04:10 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:36.636 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.636 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
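Every register captured in this scan comes from the same nvme_get loop: each line of nvme-cli's id-ctrl/id-ns output is split on ':' (IFS=:), empty values are skipped, and the pair is eval'd into a named global associative array such as ng0n1. A stripped-down sketch of that parse, using a plain associative array in place of the eval-into-a-named-array indirection (the nvme-cli path and device are the ones this scan uses):

    # Parse "field : value" lines from nvme-cli into an associative array.
    declare -A ns
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # "nsze   " -> "nsze"
        val=${val# }                # drop the space nvme-cli prints after ':'
        [[ -n $reg && -n $val ]] && ns[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)
    echo "nsze=${ns[nsze]}"         # 0x140000 in the dump above

With two variables, read leaves any further ':' characters in val, which is why values like subnqn (nqn.2019-08.org.qemu:12341) and the lbaf descriptors survive the split intact.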
00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:36.637 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
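The eight lbaf entries just captured enumerate the namespace's supported LBA formats: ms is the metadata bytes carried per block, lbads is the data block size expressed as a power-of-two exponent, and rp is the relative performance hint; the "(in use)" tag marks the format currently selected through the namespace's flbas field. A quick way to turn lbads into bytes, assuming the array built above:

    # lbads is log2 of the logical block size:
    #   lbaf0 "lbads:9"  -> 512-byte blocks
    #   lbaf4 "lbads:12" -> 4096-byte blocks (the format in use here)
    for lbads in 9 12; do
        echo "lbads:${lbads} -> $((1 << lbads)) bytes"
    done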
00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:36.637 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:36.638 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.638 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.639 18:04:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:36.639 18:04:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.639 18:04:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:36.639 18:04:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.639 18:04:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:36.639 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:36.639 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
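At this point the loop has moved on from namespaces to "nvme id-ctrl /dev/nvme1" and is filling the nvme1 array with controller identity data: vid 0x1b36 and ssvid 0x1af4 are the Red Hat PCI IDs QEMU uses for emulated devices, the model string is "QEMU NVMe Ctrl", and oacs=0x12a advertises Format NVM, namespace management, directives, and doorbell buffer config. mdts=7 caps the per-command data transfer size as a power of two in units of the controller's minimum page size; assuming the usual 4 KiB CAP.MPSMIN for a QEMU controller (the CAP register itself is not in this trace), the limit works out as:

    mdts=7                    # from nvme1[mdts] in the trace above
    mpsmin_bytes=4096         # assumption: CAP.MPSMIN = 4 KiB
    echo $(( (1 << mdts) * mpsmin_bytes ))   # 524288 -> 512 KiB per transfer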
00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.640 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
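Most of the remaining id-ctrl fields are zero on this emulated controller; among the non-zero values already parsed, the thermal thresholds wctemp=343 and cctemp=373 are reported in Kelvin per the NVMe spec, so they convert to Celsius as below, assuming the nvme1 array populated above:

    echo $(( nvme1[wctemp] - 273 ))   # 343 K -> 70 C warning threshold
    echo $(( nvme1[cctemp] - 273 ))   # 373 K -> 100 C critical threshold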
00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:36.641 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:36.642 18:04:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.642 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:36.643 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.643 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.643 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.644 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:36.644 18:04:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:36.644 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:36.644 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
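[editor's note] The trace above repeats one pattern per id-ctrl/id-ns field: run nvme-cli, split each "field : value" line on the first colon, and store the pair in a global associative array named after the device node. A minimal reconstruction of that nvme_get helper, inferred from the functions.sh@16-23 lines visible in this trace (a sketch, not the verbatim SPDK source; the whitespace trimming is an assumption):

    # Sketch of nvme_get as implied by the xtrace, e.g.:
    #   nvme_get nvme1n1 id-ns /dev/nvme1n1
    nvme_get() {
        local ref=$1 reg val          # functions.sh@17
        shift                         # functions.sh@18
        local -gA "$ref=()"           # functions.sh@20: global assoc array

        # functions.sh@21-23: read "field : value" lines; read -r reg val
        # splits only on the first ':' so values like "mp:25.00W ..." survive.
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}  # assumed trim: "nsze " -> "nsze"
            val=${val# }              # drop one leading space, keep padding
            [[ -n $val ]] && eval "${ref}[${reg}]=\"\$val\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }

With the id-ns output seen here, this yields nvme1n1[nsze]=0x17a17a, nvme1n1[nlbaf]=7, nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)', and so on; header lines with an empty value are skipped by the [[ -n ]] test, matching the first "[[ -n '' ]]" in each parse.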
00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:36.645 18:04:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.645 18:04:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:36.645 18:04:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.645 18:04:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.645 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:36.646 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
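[editor's note] The functions.sh@47-63 lines earlier in this trace (where nvme1 is finished and nvme2 is picked up) imply the enumeration loop that drives nvme_get. A sketch under stated assumptions: the loop body lives inside a function in the real script (local -n requires that), and resolving the PCI BDF from the controller's sysfs device link is a guess; array names and the extglob namespace pattern are taken from the log:

    shopt -s extglob
    for ctrl in /sys/class/nvme/nvme*; do               # functions.sh@47
        [[ -e $ctrl ]] || continue                      # functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")") # assumption: BDF source
        pci_can_use "$pci" || continue                  # functions.sh@50
        ctrl_dev=${ctrl##*/}                            # functions.sh@51: nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # functions.sh@52
        local -n _ctrl_ns=${ctrl_dev}_ns                # functions.sh@53
        # functions.sh@54: both ngXnY and nvmeXnY nodes per namespace
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue                    # functions.sh@55
            ns_dev=${ns##*/}                            # functions.sh@56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # functions.sh@57
            _ctrl_ns[${ns##*n}]=$ns_dev                 # functions.sh@58
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                    # functions.sh@60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns               # functions.sh@61
        bdfs["$ctrl_dev"]=$pci                          # functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # functions.sh@63
    done

This matches what the log records for nvme1 (bdfs[nvme1]=0000:00:10.0, both ng1n1 and nvme1n1 parsed) before the loop advances to nvme2 at 0000:00:12.0.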
00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.646 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:36.647 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.647 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 
18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:36.648 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.649 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:36.650 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 
18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:36.650 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:36.651 
18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.651 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:36.652 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.652 18:04:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:36.652 18:04:10 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.652 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.653 
18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:36.653 18:04:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:36.653 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.654 
18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:36.654 18:04:10 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:36.654 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:36.655 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.655 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:36.656 18:04:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:36.656 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:36.656 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.657 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:36.657 18:04:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.657 18:04:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:36.657 18:04:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.657 18:04:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.657 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 
18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:36.658 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.659 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:36.660 18:04:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:36.660 18:04:10 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:36.923 18:04:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:36.923 18:04:11 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:36.924 18:04:11 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:36.924 18:04:11 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:36.924 18:04:11 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:36.924 18:04:11 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.759 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.759 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.759 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.759 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.021 18:04:12 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:38.021 18:04:12 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:38.021 18:04:12 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.021 18:04:12 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:38.021 ************************************ 00:09:38.021 START TEST nvme_flexible_data_placement 00:09:38.021 ************************************ 00:09:38.021 18:04:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:38.021 Initializing NVMe Controllers 00:09:38.021 Attaching to 0000:00:13.0 00:09:38.021 Controller supports FDP Attached to 0000:00:13.0 00:09:38.021 Namespace ID: 1 Endurance Group ID: 1 00:09:38.021 Initialization complete. 
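The get_ctrls_with_feature loop traced above picks the FDP-capable controller by testing bit 19 of the Identify Controller CTRATT field: nvme3 reports ctratt=0x88010, which has bit 19 (0x80000) set, while nvme0, nvme1 and nvme2 report 0x8000 and are skipped. A standalone sketch of the same check (an illustration only, assuming an nvme-cli build with JSON output and jq on PATH; the traced nvme/functions.sh reads the value from its pre-parsed register array instead):

ctrl_has_fdp() {
    # CTRATT bit 19 advertises Flexible Data Placement support.
    local dev=$1 ctratt
    ctratt=$(nvme id-ctrl "$dev" -o json | jq -r '.ctratt')
    (( ctratt & 1 << 19 ))
}

ctrl_has_fdp /dev/nvme3 && echo "nvme3 supports FDP"   # 0x88010 & 0x80000 is non-zero

The test then takes that controller's BDF (0000:00:13.0), rebinds it through setup.sh, and hands it to the fdp binary via -r 'trtype:pcie traddr:0000:00:13.0', which produces the report that follows.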
00:09:38.021 00:09:38.021 ================================== 00:09:38.021 == FDP tests for Namespace: #01 == 00:09:38.021 ================================== 00:09:38.021 00:09:38.021 Get Feature: FDP: 00:09:38.021 ================= 00:09:38.021 Enabled: Yes 00:09:38.021 FDP configuration Index: 0 00:09:38.021 00:09:38.021 FDP configurations log page 00:09:38.021 =========================== 00:09:38.021 Number of FDP configurations: 1 00:09:38.021 Version: 0 00:09:38.021 Size: 112 00:09:38.021 FDP Configuration Descriptor: 0 00:09:38.021 Descriptor Size: 96 00:09:38.021 Reclaim Group Identifier format: 2 00:09:38.021 FDP Volatile Write Cache: Not Present 00:09:38.021 FDP Configuration: Valid 00:09:38.021 Vendor Specific Size: 0 00:09:38.021 Number of Reclaim Groups: 2 00:09:38.021 Number of Reclaim Unit Handles: 8 00:09:38.021 Max Placement Identifiers: 128 00:09:38.021 Number of Namespaces Supported: 256 00:09:38.021 Reclaim Unit Nominal Size: 6000000 bytes 00:09:38.021 Estimated Reclaim Unit Time Limit: Not Reported 00:09:38.021 RUH Desc #000: RUH Type: Initially Isolated 00:09:38.021 RUH Desc #001: RUH Type: Initially Isolated 00:09:38.021 RUH Desc #002: RUH Type: Initially Isolated 00:09:38.021 RUH Desc #003: RUH Type: Initially Isolated 00:09:38.021 RUH Desc #004: RUH Type: Initially Isolated 00:09:38.021 RUH Desc #005: RUH Type: Initially Isolated 00:09:38.021 RUH Desc #006: RUH Type: Initially Isolated 00:09:38.021 RUH Desc #007: RUH Type: Initially Isolated 00:09:38.021 00:09:38.021 FDP reclaim unit handle usage log page 00:09:38.021 ====================================== 00:09:38.021 Number of Reclaim Unit Handles: 8 00:09:38.021 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:38.021 RUH Usage Desc #001: RUH Attributes: Unused 00:09:38.021 RUH Usage Desc #002: RUH Attributes: Unused 00:09:38.021 RUH Usage Desc #003: RUH Attributes: Unused 00:09:38.021 RUH Usage Desc #004: RUH Attributes: Unused 00:09:38.021 RUH Usage Desc #005: RUH Attributes: Unused 00:09:38.021 RUH Usage Desc #006: RUH Attributes: Unused 00:09:38.021 RUH Usage Desc #007: RUH Attributes: Unused 00:09:38.021 00:09:38.021 FDP statistics log page 00:09:38.021 ======================= 00:09:38.021 Host bytes with metadata written: 2198245376 00:09:38.021 Media bytes with metadata written: 2198581248 00:09:38.021 Media bytes erased: 0 00:09:38.021 00:09:38.021 FDP Reclaim unit handle status 00:09:38.021 ============================== 00:09:38.021 Number of RUHS descriptors: 2 00:09:38.021 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000f97 00:09:38.021 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:38.021 00:09:38.021 FDP write on placement id: 0 success 00:09:38.021 00:09:38.021 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:38.021 00:09:38.021 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:38.021 00:09:38.021 Get Feature: FDP Events for Placement handle: #0 00:09:38.021 ======================== 00:09:38.021 Number of FDP Events: 6 00:09:38.021 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:38.021 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:38.021 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:38.021 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:38.021 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:38.021 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:38.021 00:09:38.021 FDP events log 
page 00:09:38.021 =================== 00:09:38.021 Number of FDP events: 1 00:09:38.021 FDP Event #0: 00:09:38.021 Event Type: RU Not Written to Capacity 00:09:38.021 Placement Identifier: Valid 00:09:38.021 NSID: Valid 00:09:38.021 Location: Valid 00:09:38.021 Placement Identifier: 0 00:09:38.021 Event Timestamp: 3 00:09:38.021 Namespace Identifier: 1 00:09:38.021 Reclaim Group Identifier: 0 00:09:38.021 Reclaim Unit Handle Identifier: 0 00:09:38.021 00:09:38.021 FDP test passed 00:09:38.021 00:09:38.021 real 0m0.205s 00:09:38.021 user 0m0.050s 00:09:38.021 sys 0m0.055s 00:09:38.021 18:04:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.021 18:04:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:38.021 ************************************ 00:09:38.021 END TEST nvme_flexible_data_placement 00:09:38.021 ************************************ 00:09:38.283 00:09:38.283 real 0m7.578s 00:09:38.283 user 0m1.063s 00:09:38.283 sys 0m1.367s 00:09:38.283 18:04:12 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.283 18:04:12 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:38.283 ************************************ 00:09:38.283 END TEST nvme_fdp 00:09:38.283 ************************************ 00:09:38.283 18:04:12 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:38.283 18:04:12 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:38.283 18:04:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.283 18:04:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.283 18:04:12 -- common/autotest_common.sh@10 -- # set +x 00:09:38.283 ************************************ 00:09:38.283 START TEST nvme_rpc 00:09:38.283 ************************************ 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:38.283 * Looking for test storage... 
00:09:38.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.283 18:04:12 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:38.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.283 --rc genhtml_branch_coverage=1 00:09:38.283 --rc genhtml_function_coverage=1 00:09:38.283 --rc genhtml_legend=1 00:09:38.283 --rc geninfo_all_blocks=1 00:09:38.283 --rc geninfo_unexecuted_blocks=1 00:09:38.283 00:09:38.283 ' 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:38.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.283 --rc genhtml_branch_coverage=1 00:09:38.283 --rc genhtml_function_coverage=1 00:09:38.283 --rc genhtml_legend=1 00:09:38.283 --rc geninfo_all_blocks=1 00:09:38.283 --rc geninfo_unexecuted_blocks=1 00:09:38.283 00:09:38.283 ' 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:38.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.283 --rc genhtml_branch_coverage=1 00:09:38.283 --rc genhtml_function_coverage=1 00:09:38.283 --rc genhtml_legend=1 00:09:38.283 --rc geninfo_all_blocks=1 00:09:38.283 --rc geninfo_unexecuted_blocks=1 00:09:38.283 00:09:38.283 ' 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:38.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.283 --rc genhtml_branch_coverage=1 00:09:38.283 --rc genhtml_function_coverage=1 00:09:38.283 --rc genhtml_legend=1 00:09:38.283 --rc geninfo_all_blocks=1 00:09:38.283 --rc geninfo_unexecuted_blocks=1 00:09:38.283 00:09:38.283 ' 00:09:38.283 18:04:12 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.283 18:04:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:38.283 18:04:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:38.283 18:04:12 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78981 00:09:38.283 18:04:12 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:38.283 18:04:12 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:38.283 18:04:12 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78981 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 78981 ']' 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.283 18:04:12 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.544 [2024-12-13 18:04:12.716560] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
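The nvme_rpc setup traced above derives its target device before any RPC call: get_first_nvme_bdf runs gen_nvme.sh, extracts every traddr with jq, keeps the first of the four BDFs found (0000:00:10.0), then launches spdk_tgt -m 0x3 and waits for the JSON-RPC socket. A condensed sketch of that setup (paths taken from the trace; the polling loop is a simplified stand-in for the real waitforlisten helper):

rootdir=/home/vagrant/spdk_repo/spdk

# Collect every NVMe BDF known to the config generator, keep the first.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
bdf=${bdfs[0]}                        # 0000:00:10.0 in this run

# Start the target on cores 0-1 and poll until /var/tmp/spdk.sock answers.
"$rootdir/build/bin/spdk_tgt" -m 0x3 &
spdk_tgt_pid=$!
until "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done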
00:09:38.544 [2024-12-13 18:04:12.716682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78981 ] 00:09:38.544 [2024-12-13 18:04:12.863610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:38.544 [2024-12-13 18:04:12.882783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.544 [2024-12-13 18:04:12.882883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.491 18:04:13 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.491 18:04:13 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:39.491 18:04:13 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:39.491 Nvme0n1 00:09:39.491 18:04:13 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:39.491 18:04:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:39.763 request: 00:09:39.763 { 00:09:39.763 "bdev_name": "Nvme0n1", 00:09:39.763 "filename": "non_existing_file", 00:09:39.763 "method": "bdev_nvme_apply_firmware", 00:09:39.763 "req_id": 1 00:09:39.763 } 00:09:39.763 Got JSON-RPC error response 00:09:39.763 response: 00:09:39.763 { 00:09:39.763 "code": -32603, 00:09:39.763 "message": "open file failed." 00:09:39.763 } 00:09:39.763 18:04:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:39.763 18:04:13 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:39.763 18:04:13 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:40.031 18:04:14 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:40.031 18:04:14 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78981 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 78981 ']' 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 78981 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78981 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.031 killing process with pid 78981 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78981' 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@973 -- # kill 78981 00:09:40.031 18:04:14 nvme_rpc -- common/autotest_common.sh@978 -- # wait 78981 00:09:40.290 00:09:40.290 real 0m2.022s 00:09:40.290 user 0m3.986s 00:09:40.290 sys 0m0.459s 00:09:40.290 18:04:14 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.290 18:04:14 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 ************************************ 00:09:40.290 END TEST nvme_rpc 00:09:40.290 ************************************ 00:09:40.290 18:04:14 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:40.290 18:04:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:40.290 18:04:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.290 18:04:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 ************************************ 00:09:40.290 START TEST nvme_rpc_timeouts 00:09:40.290 ************************************ 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:40.290 * Looking for test storage... 00:09:40.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:40.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
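Stepping back: the nvme_rpc run above reduces to three rpc.py calls against the running target. A minimal standalone sketch, with the rpc.py path, PCI address, and filenames copied from this run (the default socket /var/tmp/spdk.sock is assumed):

# Sketch of the nvme_rpc flow traced above: attach, expect apply_firmware to fail, detach.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC_PY bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # prints the new bdev name, Nvme0n1
if ! $RPC_PY bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo 'got the expected JSON-RPC error -32603 ("open file failed.")'
fi
$RPC_PY bdev_nvme_detach_controller Nvme0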
00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.290 18:04:14 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.290 --rc genhtml_branch_coverage=1 00:09:40.290 --rc genhtml_function_coverage=1 00:09:40.290 --rc genhtml_legend=1 00:09:40.290 --rc geninfo_all_blocks=1 00:09:40.290 --rc geninfo_unexecuted_blocks=1 00:09:40.290 00:09:40.290 ' 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.290 --rc genhtml_branch_coverage=1 00:09:40.290 --rc genhtml_function_coverage=1 00:09:40.290 --rc genhtml_legend=1 00:09:40.290 --rc geninfo_all_blocks=1 00:09:40.290 --rc geninfo_unexecuted_blocks=1 00:09:40.290 00:09:40.290 ' 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.290 --rc genhtml_branch_coverage=1 00:09:40.290 --rc genhtml_function_coverage=1 00:09:40.290 --rc genhtml_legend=1 00:09:40.290 --rc geninfo_all_blocks=1 00:09:40.290 --rc geninfo_unexecuted_blocks=1 00:09:40.290 00:09:40.290 ' 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.290 --rc genhtml_branch_coverage=1 00:09:40.290 --rc genhtml_function_coverage=1 00:09:40.290 --rc genhtml_legend=1 00:09:40.290 --rc geninfo_all_blocks=1 00:09:40.290 --rc geninfo_unexecuted_blocks=1 00:09:40.290 00:09:40.290 ' 00:09:40.290 18:04:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.290 18:04:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79035 00:09:40.290 18:04:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79035 00:09:40.290 18:04:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79067 00:09:40.290 18:04:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:40.290 18:04:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79067 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 79067 ']' 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.290 18:04:14 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 18:04:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:40.548 [2024-12-13 18:04:14.723418] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:40.548 [2024-12-13 18:04:14.723955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79067 ] 00:09:40.548 [2024-12-13 18:04:14.867708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.548 [2024-12-13 18:04:14.887239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.548 [2024-12-13 18:04:14.887319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.480 Checking default timeout settings: 00:09:41.480 18:04:15 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.480 18:04:15 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:41.480 18:04:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:41.480 18:04:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:41.738 Making settings changes with rpc: 00:09:41.738 18:04:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:41.738 18:04:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:41.738 Check default vs. modified settings: 00:09:41.738 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:41.738 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79035 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79035 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.304 Setting action_on_timeout is changed as expected. 
00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79035 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79035 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.304 Setting timeout_us is changed as expected. 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79035 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79035 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.304 Setting timeout_admin_us is changed as expected. 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
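The three checks above amount to a save_config diff. A condensed sketch of the same pattern, with the RPC, option values, and temp-file names taken from this run; the grep is tightened to the exact quoted key so that timeout_us cannot also match timeout_admin_us:

# Condensed default-vs-modified settings check, as traced above.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC_PY save_config > /tmp/settings_default_79035
$RPC_PY bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$RPC_PY save_config > /tmp/settings_modified_79035
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "\"$setting\"" /tmp/settings_default_79035 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "\"$setting\"" /tmp/settings_modified_79035 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done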
00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79035 /tmp/settings_modified_79035 00:09:42.304 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79067 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 79067 ']' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 79067 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79067 00:09:42.304 killing process with pid 79067 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79067' 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 79067 00:09:42.304 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 79067 00:09:42.562 RPC TIMEOUT SETTING TEST PASSED. 00:09:42.562 18:04:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:42.562 ************************************ 00:09:42.562 END TEST nvme_rpc_timeouts 00:09:42.562 ************************************ 00:09:42.562 00:09:42.562 real 0m2.182s 00:09:42.562 user 0m4.432s 00:09:42.562 sys 0m0.420s 00:09:42.562 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.562 18:04:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:42.562 18:04:16 -- spdk/autotest.sh@239 -- # uname -s 00:09:42.562 18:04:16 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:42.562 18:04:16 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:42.562 18:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.562 18:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.562 18:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:42.562 ************************************ 00:09:42.562 START TEST sw_hotplug 00:09:42.562 ************************************ 00:09:42.562 18:04:16 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:42.562 * Looking for test storage... 
00:09:42.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.562 18:04:16 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.562 18:04:16 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.562 18:04:16 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.562 18:04:16 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.562 18:04:16 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:42.562 18:04:16 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.562 18:04:16 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.562 --rc genhtml_branch_coverage=1 00:09:42.562 --rc genhtml_function_coverage=1 00:09:42.562 --rc genhtml_legend=1 00:09:42.562 --rc geninfo_all_blocks=1 00:09:42.562 --rc geninfo_unexecuted_blocks=1 00:09:42.562 00:09:42.562 ' 00:09:42.562 18:04:16 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.562 --rc genhtml_branch_coverage=1 00:09:42.562 --rc genhtml_function_coverage=1 00:09:42.562 --rc genhtml_legend=1 00:09:42.562 --rc geninfo_all_blocks=1 00:09:42.562 --rc geninfo_unexecuted_blocks=1 00:09:42.562 00:09:42.562 ' 00:09:42.562 18:04:16 
sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.562 --rc genhtml_branch_coverage=1 00:09:42.562 --rc genhtml_function_coverage=1 00:09:42.562 --rc genhtml_legend=1 00:09:42.562 --rc geninfo_all_blocks=1 00:09:42.562 --rc geninfo_unexecuted_blocks=1 00:09:42.562 00:09:42.562 ' 00:09:42.562 18:04:16 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.562 --rc genhtml_branch_coverage=1 00:09:42.562 --rc genhtml_function_coverage=1 00:09:42.562 --rc genhtml_legend=1 00:09:42.562 --rc geninfo_all_blocks=1 00:09:42.562 --rc geninfo_unexecuted_blocks=1 00:09:42.562 00:09:42.562 ' 00:09:42.562 18:04:16 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:42.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.078 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:43.078 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:43.078 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:43.078 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:43.078 18:04:17 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:43.078 18:04:17 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:43.078 18:04:17 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:43.078 18:04:17 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:43.078 18:04:17 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:43.079 
18:04:17 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:43.079 18:04:17 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:43.079 18:04:17 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:43.079 18:04:17 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:43.079 18:04:17 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:43.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.595 Waiting for block devices as requested 00:09:43.595 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.595 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.595 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.595 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.860 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:48.860 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:48.860 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:49.118 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:49.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:49.118 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:49.376 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:49.634 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.634 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:49.634 18:04:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79908 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:49.634 18:04:23 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:49.634 18:04:23 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:49.634 18:04:23 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:49.634 18:04:23 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:49.634 18:04:23 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:49.634 18:04:23 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:49.892 Initializing NVMe Controllers 00:09:49.892 Attaching to 0000:00:10.0 00:09:49.892 Attaching to 0000:00:11.0 00:09:49.892 Attached to 0000:00:11.0 00:09:49.892 Attached to 0000:00:10.0 00:09:49.892 Initialization complete. Starting I/O... 00:09:49.892 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:49.892 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:49.892 00:09:50.831 QEMU NVMe Ctrl (12341 ): 3006 I/Os completed (+3006) 00:09:50.831 QEMU NVMe Ctrl (12340 ): 3021 I/Os completed (+3021) 00:09:50.831 00:09:52.205 QEMU NVMe Ctrl (12341 ): 6726 I/Os completed (+3720) 00:09:52.205 QEMU NVMe Ctrl (12340 ): 6763 I/Os completed (+3742) 00:09:52.205 00:09:53.139 QEMU NVMe Ctrl (12341 ): 10417 I/Os completed (+3691) 00:09:53.139 QEMU NVMe Ctrl (12340 ): 10420 I/Os completed (+3657) 00:09:53.139 00:09:54.072 QEMU NVMe Ctrl (12341 ): 14279 I/Os completed (+3862) 00:09:54.072 QEMU NVMe Ctrl (12340 ): 14237 I/Os completed (+3817) 00:09:54.072 00:09:55.013 QEMU NVMe Ctrl (12341 ): 18405 I/Os completed (+4126) 00:09:55.013 QEMU NVMe Ctrl (12340 ): 18340 I/Os completed (+4103) 00:09:55.013 00:09:55.950 18:04:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:55.950 18:04:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:55.950 18:04:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:55.950 [2024-12-13 18:04:29.986276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:55.950 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:55.950 [2024-12-13 18:04:29.987418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:29.987469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:29.987485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:29.987511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:55.950 [2024-12-13 18:04:29.988628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:29.988666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:29.988679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:29.988693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:55.950 [2024-12-13 18:04:30.009149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
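The nvme_in_userspace helper traced earlier selects controllers by PCI class code: class 01 (mass storage), subclass 08 (NVM), progif 02 (NVM Express). A standalone sketch of that filter, with the lspci/awk/tr pipeline copied verbatim from the trace (the quotes around cc are deliberate so the awk regex match lines up with lspci's quoted class field):

# Standalone sketch of the 01/08/02 NVMe class-code filter traced above.
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"' \
    | while read -r bdf; do
        # a BDF is only usable from userspace once it is off the kernel nvme driver
        [ -e "/sys/bus/pci/drivers/nvme/$bdf" ] && echo "$bdf still bound to kernel nvme"
    done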
00:09:55.950 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:55.950 [2024-12-13 18:04:30.010137] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:30.010201] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:30.010235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:30.010302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:55.950 [2024-12-13 18:04:30.011399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:30.011528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:30.011563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 [2024-12-13 18:04:30.011620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:55.950 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:55.950 EAL: Scan for (pci) bus failed. 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:55.950 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:55.950 Attaching to 0000:00:10.0 00:09:55.950 Attached to 0000:00:10.0 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:55.950 18:04:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:55.950 Attaching to 0000:00:11.0 00:09:55.950 Attached to 0000:00:11.0 00:09:56.883 QEMU NVMe Ctrl (12340 ): 3599 I/Os completed (+3599) 00:09:56.883 QEMU NVMe Ctrl (12341 ): 3358 I/Os completed (+3358) 00:09:56.883 00:09:57.814 QEMU NVMe Ctrl (12340 ): 7289 I/Os completed (+3690) 00:09:57.814 QEMU NVMe Ctrl (12341 ): 7105 I/Os completed (+3747) 00:09:57.814 00:09:59.187 QEMU NVMe Ctrl (12340 ): 10922 I/Os completed (+3633) 00:09:59.187 QEMU NVMe Ctrl (12341 ): 10761 I/Os completed (+3656) 00:09:59.187 00:10:00.121 QEMU NVMe Ctrl (12340 ): 14900 I/Os completed (+3978) 00:10:00.121 QEMU NVMe Ctrl (12341 ): 14845 I/Os completed (+4084) 00:10:00.121 00:10:01.054 QEMU NVMe Ctrl (12340 ): 19214 I/Os completed (+4314) 00:10:01.054 QEMU NVMe Ctrl (12341 ): 19152 I/Os completed (+4307) 00:10:01.054 00:10:01.987 QEMU NVMe Ctrl (12340 ): 23174 I/Os completed (+3960) 00:10:01.987 QEMU NVMe Ctrl (12341 ): 23089 I/Os completed (+3937) 00:10:01.987 00:10:02.921 QEMU NVMe Ctrl (12340 ): 26938 I/Os completed (+3764) 00:10:02.922 QEMU NVMe Ctrl (12341 ): 26853 I/Os completed (+3764) 
00:10:02.922 00:10:03.867 QEMU NVMe Ctrl (12340 ): 30543 I/Os completed (+3605) 00:10:03.867 QEMU NVMe Ctrl (12341 ): 30462 I/Os completed (+3609) 00:10:03.867 00:10:04.811 QEMU NVMe Ctrl (12340 ): 33692 I/Os completed (+3149) 00:10:04.811 QEMU NVMe Ctrl (12341 ): 33613 I/Os completed (+3151) 00:10:04.811 00:10:06.195 QEMU NVMe Ctrl (12340 ): 36937 I/Os completed (+3245) 00:10:06.195 QEMU NVMe Ctrl (12341 ): 36927 I/Os completed (+3314) 00:10:06.195 00:10:07.139 QEMU NVMe Ctrl (12340 ): 40443 I/Os completed (+3506) 00:10:07.139 QEMU NVMe Ctrl (12341 ): 40447 I/Os completed (+3520) 00:10:07.139 00:10:08.080 QEMU NVMe Ctrl (12340 ): 44862 I/Os completed (+4419) 00:10:08.080 QEMU NVMe Ctrl (12341 ): 44840 I/Os completed (+4393) 00:10:08.080 00:10:08.080 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:08.080 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.081 [2024-12-13 18:04:42.246770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:08.081 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:08.081 [2024-12-13 18:04:42.247622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.247723] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.247751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.247816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:08.081 [2024-12-13 18:04:42.248914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.249002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.249028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.249080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.081 [2024-12-13 18:04:42.265623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:08.081 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:08.081 [2024-12-13 18:04:42.266413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.266503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.266520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.266533] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:08.081 [2024-12-13 18:04:42.267340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.267366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.267379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 [2024-12-13 18:04:42.267389] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.081 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:08.081 EAL: Scan for (pci) bus failed. 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:08.081 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:08.081 Attaching to 0000:00:10.0 00:10:08.081 Attached to 0000:00:10.0 00:10:08.339 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:08.339 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:08.339 18:04:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:08.339 Attaching to 0000:00:11.0 00:10:08.339 Attached to 0000:00:11.0 00:10:08.906 QEMU NVMe Ctrl (12340 ): 3233 I/Os completed (+3233) 00:10:08.906 QEMU NVMe Ctrl (12341 ): 2794 I/Os completed (+2794) 00:10:08.906 00:10:09.841 QEMU NVMe Ctrl (12340 ): 7577 I/Os completed (+4344) 00:10:09.841 QEMU NVMe Ctrl (12341 ): 7106 I/Os completed (+4312) 00:10:09.841 00:10:11.214 QEMU NVMe Ctrl (12340 ): 11906 I/Os completed (+4329) 00:10:11.214 QEMU NVMe Ctrl (12341 ): 11433 I/Os completed (+4327) 00:10:11.214 00:10:11.780 QEMU NVMe Ctrl (12340 ): 16229 I/Os completed (+4323) 00:10:11.780 QEMU NVMe Ctrl (12341 ): 15736 I/Os completed (+4303) 00:10:11.780 00:10:13.151 QEMU NVMe Ctrl (12340 ): 20587 I/Os completed (+4358) 00:10:13.151 QEMU NVMe Ctrl (12341 ): 20061 I/Os completed (+4325) 00:10:13.151 00:10:14.084 QEMU NVMe Ctrl (12340 ): 24511 I/Os completed (+3924) 00:10:14.084 QEMU NVMe Ctrl (12341 ): 23968 I/Os completed (+3907) 00:10:14.084 00:10:15.019 QEMU NVMe Ctrl (12340 ): 28439 I/Os completed (+3928) 00:10:15.019 QEMU NVMe Ctrl (12341 ): 27892 I/Os completed (+3924) 00:10:15.019 
00:10:15.953 QEMU NVMe Ctrl (12340 ): 32562 I/Os completed (+4123) 00:10:15.953 QEMU NVMe Ctrl (12341 ): 31999 I/Os completed (+4107) 00:10:15.953 00:10:16.886 QEMU NVMe Ctrl (12340 ): 36366 I/Os completed (+3804) 00:10:16.886 QEMU NVMe Ctrl (12341 ): 35812 I/Os completed (+3813) 00:10:16.886 00:10:17.821 QEMU NVMe Ctrl (12340 ): 40134 I/Os completed (+3768) 00:10:17.821 QEMU NVMe Ctrl (12341 ): 39588 I/Os completed (+3776) 00:10:17.821 00:10:19.195 QEMU NVMe Ctrl (12340 ): 44335 I/Os completed (+4201) 00:10:19.195 QEMU NVMe Ctrl (12341 ): 43750 I/Os completed (+4162) 00:10:19.195 00:10:20.127 QEMU NVMe Ctrl (12340 ): 48514 I/Os completed (+4179) 00:10:20.127 QEMU NVMe Ctrl (12341 ): 48053 I/Os completed (+4303) 00:10:20.127 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.385 [2024-12-13 18:04:54.512068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:20.385 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:20.385 [2024-12-13 18:04:54.513213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.513353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.513391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.513463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:20.385 [2024-12-13 18:04:54.514766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.514862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.514893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.514947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.385 [2024-12-13 18:04:54.530981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
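The I/O counters above come from repeated surprise-removal cycles. The trace only shows the bare echo commands, so the redirect targets below are assumptions based on the standard kernel PCI sysfs interface (the script's own trap confirms /sys/bus/pci/rescan):

# One hotplug iteration, sketched; sysfs targets are assumed, not from the trace.
bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"          # surprise-remove; outstanding I/O is aborted
echo 1 > /sys/bus/pci/rescan                         # rediscover the device
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe             # rebind to the userspace driver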
00:10:20.385 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:20.385 [2024-12-13 18:04:54.531966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.532066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.532128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.532156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:20.385 [2024-12-13 18:04:54.533306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.533396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.533454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 [2024-12-13 18:04:54.533482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:20.385 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:20.385 EAL: Scan for (pci) bus failed. 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:20.385 Attaching to 0000:00:10.0 00:10:20.385 Attached to 0000:00:10.0 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.385 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:20.385 Attaching to 0000:00:11.0 00:10:20.385 Attached to 0000:00:11.0 00:10:20.385 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:20.385 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:20.385 [2024-12-13 18:04:54.745593] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:32.582 18:05:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:32.582 18:05:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:32.582 18:05:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.76 00:10:32.582 18:05:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.76 00:10:32.582 18:05:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:32.582 18:05:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.76 00:10:32.582 18:05:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.76 2 00:10:32.582 remove_attach_helper took 42.76s to complete (handling 2 nvme drive(s)) 18:05:06 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:39.178 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79908 00:10:39.178 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79908) - No such process 00:10:39.178 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79908 00:10:39.178 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:39.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.178 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:39.178 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:39.178 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80458 00:10:39.178 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:39.178 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80458 00:10:39.178 18:05:12 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80458 ']' 00:10:39.178 18:05:12 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.178 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:39.178 18:05:12 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.178 18:05:12 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.178 18:05:12 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.178 18:05:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.178 [2024-12-13 18:05:12.834238] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:39.178 [2024-12-13 18:05:12.834653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80458 ] 00:10:39.178 [2024-12-13 18:05:12.979202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.178 [2024-12-13 18:05:13.008516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:39.437 18:05:13 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.437 18:05:13 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:39.437 18:05:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:39.437 18:05:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:39.437 18:05:13 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:39.437 18:05:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:39.437 18:05:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:39.437 18:05:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:39.437 18:05:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:39.437 18:05:13 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.998 18:05:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.998 18:05:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.998 18:05:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:45.998 18:05:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:45.998 [2024-12-13 18:05:19.779226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:45.998 [2024-12-13 18:05:19.780298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.998 [2024-12-13 18:05:19.780331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.998 [2024-12-13 18:05:19.780343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.998 [2024-12-13 18:05:19.780356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.998 [2024-12-13 18:05:19.780364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.998 [2024-12-13 18:05:19.780371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.998 [2024-12-13 18:05:19.780381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.998 [2024-12-13 18:05:19.780387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.998 [2024-12-13 18:05:19.780394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.998 [2024-12-13 18:05:19.780400] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.998 [2024-12-13 18:05:19.780408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.998 [2024-12-13 18:05:19.780414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.998 [2024-12-13 18:05:20.179222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
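In bdev mode (use_bdev=true) the helper waits on the target rather than on sysfs: the bdev_bdfs function lists every PCI address still known to the target. A sketch reassembled from the traced commands (the jq filter, sleep interval, and message are verbatim from the trace; the loop shape is an assumption):

# bdev_bdfs as traced: every attached controller's PCI address, deduplicated.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev_bdfs() {
    "$RPC_PY" bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
}
# after a removal, poll until no controller remains visible to the target
while bdfs=$(bdev_bdfs) && [ -n "$bdfs" ]; do
    printf 'Still waiting for %s to be gone\n' $bdfs
    sleep 0.5
done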
00:10:45.998 [2024-12-13 18:05:20.180272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.999 [2024-12-13 18:05:20.180394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.999 [2024-12-13 18:05:20.180408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.999 [2024-12-13 18:05:20.180420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.999 [2024-12-13 18:05:20.180427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.999 [2024-12-13 18:05:20.180436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.999 [2024-12-13 18:05:20.180442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.999 [2024-12-13 18:05:20.180449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.999 [2024-12-13 18:05:20.180456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.999 [2024-12-13 18:05:20.180467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.999 [2024-12-13 18:05:20.180473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.999 [2024-12-13 18:05:20.180481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.999 18:05:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.999 18:05:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.999 18:05:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:45.999 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:46.257 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:46.257 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.257 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.257 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.257 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:46.257 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:46.257 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.257 18:05:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:58.479 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:58.479 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:58.479 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.480 18:05:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.480 18:05:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.480 18:05:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.480 [2024-12-13 18:05:32.579390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:58.480 [2024-12-13 18:05:32.580712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.480 [2024-12-13 18:05:32.580816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.480 [2024-12-13 18:05:32.580877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.480 [2024-12-13 18:05:32.580934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.480 [2024-12-13 18:05:32.580956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.480 [2024-12-13 18:05:32.581005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.480 [2024-12-13 18:05:32.581031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.480 [2024-12-13 18:05:32.581047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.480 [2024-12-13 18:05:32.581071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.480 [2024-12-13 18:05:32.581121] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.480 [2024-12-13 18:05:32.581140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.480 [2024-12-13 18:05:32.581163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.480 18:05:32 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.480 18:05:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.480 18:05:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.480 18:05:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:58.480 18:05:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:58.738 [2024-12-13 18:05:32.979398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:58.738 [2024-12-13 18:05:32.980435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.738 [2024-12-13 18:05:32.980466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.738 [2024-12-13 18:05:32.980476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.738 [2024-12-13 18:05:32.980486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.738 [2024-12-13 18:05:32.980494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.738 [2024-12-13 18:05:32.980502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.738 [2024-12-13 18:05:32.980509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.738 [2024-12-13 18:05:32.980516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.738 [2024-12-13 18:05:32.980523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.738 [2024-12-13 18:05:32.980530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.738 [2024-12-13 18:05:32.980537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.738 [2024-12-13 18:05:32.980546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.996 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:58.996 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.996 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:10:58.997 18:05:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.997 18:05:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.997 18:05:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.997 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:59.256 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:59.256 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:59.256 18:05:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.520 18:05:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.520 18:05:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.520 18:05:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.520 18:05:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.520 18:05:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.520 [2024-12-13 18:05:45.479583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
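Annotation: the poll that drives each wait is the bdev_bdfs helper traced at sw_hotplug.sh@12-13. It asks the running SPDK target for its bdevs over RPC and extracts every NVMe PCI address still attached; the `/dev/fd/63` argument in the trace is bash process substitution. A reconstruction from the xtrace (the exact layout inside sw_hotplug.sh is inferred):

```bash
# bdev_bdfs as reconstructed from the xtrace (sw_hotplug.sh@12-13).
# rpc_cmd is autotest's wrapper around scripts/rpc.py; jq reads its
# JSON through process substitution (hence /dev/fd/63 in the trace)
# and pulls out each controller's PCI address; sort -u dedupes
# controllers that back more than one bdev.
bdev_bdfs() {
    jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
}
```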
00:11:11.520 [2024-12-13 18:05:45.480648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.520 [2024-12-13 18:05:45.480755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.520 [2024-12-13 18:05:45.480774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.520 [2024-12-13 18:05:45.480787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.520 [2024-12-13 18:05:45.480795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.520 [2024-12-13 18:05:45.480802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.520 [2024-12-13 18:05:45.480810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.520 [2024-12-13 18:05:45.480816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.520 [2024-12-13 18:05:45.480825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.520 [2024-12-13 18:05:45.480832] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.520 [2024-12-13 18:05:45.480840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.520 [2024-12-13 18:05:45.480846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.520 18:05:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:11.520 18:05:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:11.520 [2024-12-13 18:05:45.879585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
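Annotation: around that helper, sw_hotplug.sh@50-51 implement the detach wait: refresh the list and, while any address is still reported, print which devices the test is waiting on and retry every half second. The `(( 1 > 0 ))` / `(( 0 > 0 ))` pairs in the trace are the count check. A sketch assuming a plain while loop (the real control flow may differ in detail):

```bash
# Sketch of the detach-wait loop (sw_hotplug.sh@50-51); uses bdev_bdfs
# from the sketch above. The trace shows the array refresh, the
# "(( N > 0 ))" count test, the 0.5s sleep and the progress printf;
# the exact loop shape is assumed.
bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done
```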
00:11:11.520 [2024-12-13 18:05:45.880701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.520 [2024-12-13 18:05:45.880730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.520 [2024-12-13 18:05:45.880739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.520 [2024-12-13 18:05:45.880750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.520 [2024-12-13 18:05:45.880756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.520 [2024-12-13 18:05:45.880765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.520 [2024-12-13 18:05:45.880772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.520 [2024-12-13 18:05:45.880779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.520 [2024-12-13 18:05:45.880785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.520 [2024-12-13 18:05:45.880794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.520 [2024-12-13 18:05:45.880800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.521 [2024-12-13 18:05:45.880807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.781 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:11.781 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.781 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.781 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.781 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.781 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.782 18:05:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.782 18:05:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.782 18:05:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.782 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:11.782 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:11.782 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.782 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.782 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:12.043 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:12.043 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.043 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.043 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.043 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
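Annotation: once the list drains, sw_hotplug.sh@56-62 bring the controllers back; that is the run of echoes in the trace between `(( 0 > 0 ))` and `sleep 12`: a bare `echo 1`, then per device an `echo uio_pci_generic`, the BDF echoed twice, and an empty echo. A hedged reading, assuming the conventional sysfs rescan plus driver_override rebind (only the echoed values appear in the xtrace, never their destinations):

```bash
# Hedged reconstruction of the re-attach path (sw_hotplug.sh@56-62).
# Every sysfs destination below is an assumption; the trace records
# only the values written (1, uio_pci_generic, the BDF twice, '').
echo 1 > /sys/bus/pci/rescan                  # @56: re-enumerate removed functions
for dev in "${nvmes[@]}"; do                  # @58
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
    # @60/@61: the BDF is echoed twice in the trace -- plausibly a
    # bind plus a drivers_probe nudge; the exact nodes are unknown.
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear override
done
```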
00:11:12.043 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:12.043 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.043 18:05:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:24.292 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:24.292 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.64 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.64 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:11:24.293 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:24.293 18:05:58 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:24.293 18:05:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:24.293 18:05:58 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.880 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.880 18:06:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.881 18:06:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.881 18:06:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:30.881 [2024-12-13 18:06:04.452762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:30.881 [2024-12-13 18:06:04.453631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.881 [2024-12-13 18:06:04.453726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.881 [2024-12-13 18:06:04.453792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.881 [2024-12-13 18:06:04.453824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.881 [2024-12-13 18:06:04.453933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.881 [2024-12-13 18:06:04.453976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.881 [2024-12-13 18:06:04.454002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.881 [2024-12-13 18:06:04.454018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.881 [2024-12-13 18:06:04.454044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.881 [2024-12-13 18:06:04.454067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.881 [2024-12-13 18:06:04.454083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.881 [2024-12-13 18:06:04.454147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.881 [2024-12-13 18:06:04.852768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
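Annotation: each `remove_attach_helper N M true` run (restarted here via sw_hotplug.sh@122 with 3 events and a 6s wait, per the @27-29 locals in the trace) ties the steps above into one loop. A high-level skeleton reconstructed from the traced line tags; the step bodies are the sketches given earlier:

```bash
# Skeleton of remove_attach_helper (sw_hotplug.sh@27 onward), as
# reconstructed from the xtrace line tags; details of each step are
# sketched separately above, not repeated here.
remove_attach_helper() {
    local hotplug_events=$1    # @27: e.g. 3
    local hotplug_wait=$2      # @28: e.g. 6 (seconds)
    local use_bdev=$3          # @29: true -> poll via bdev_get_bdevs RPC
    local dev bdfs             # @30

    sleep "$hotplug_wait"                               # @36: let attach settle
    while (( hotplug_events-- )); do                    # @38
        for dev in "${nvmes[@]}"; do                    # @39-40: surprise-remove
            echo 1 > "/sys/bus/pci/devices/$dev/remove" # path assumed, as above
        done
        # @50-51: wait for bdev_bdfs to drain; @56-62: rescan + rebind;
        # @66: sleep 12; @68-71: verify the expected BDFs came back.
    done
}
```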
00:11:30.881 [2024-12-13 18:06:04.853605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.881 [2024-12-13 18:06:04.853708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.881 [2024-12-13 18:06:04.853771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.881 [2024-12-13 18:06:04.853826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.881 [2024-12-13 18:06:04.853846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.881 [2024-12-13 18:06:04.853874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.881 [2024-12-13 18:06:04.853960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.881 [2024-12-13 18:06:04.853976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.881 [2024-12-13 18:06:04.854040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.881 [2024-12-13 18:06:04.854095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.881 [2024-12-13 18:06:04.854114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.881 [2024-12-13 18:06:04.854174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.881 18:06:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.881 18:06:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.881 18:06:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:30.881 18:06:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.881 18:06:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.182 18:06:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.182 18:06:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.182 18:06:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.182 [2024-12-13 18:06:17.252950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:43.182 [2024-12-13 18:06:17.255146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.182 [2024-12-13 18:06:17.255275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.182 [2024-12-13 18:06:17.255345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.182 [2024-12-13 18:06:17.255458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.182 [2024-12-13 18:06:17.255479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.182 [2024-12-13 18:06:17.255503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.182 [2024-12-13 18:06:17.255527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.182 [2024-12-13 18:06:17.255542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.182 [2024-12-13 18:06:17.255599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.182 [2024-12-13 18:06:17.255623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.182 [2024-12-13 18:06:17.255641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.182 [2024-12-13 18:06:17.255664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.182 18:06:17 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.182 18:06:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.182 18:06:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.182 18:06:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:43.182 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:43.443 [2024-12-13 18:06:17.752954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:43.443 [2024-12-13 18:06:17.753774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.443 [2024-12-13 18:06:17.753807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.443 [2024-12-13 18:06:17.753818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.443 [2024-12-13 18:06:17.753830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.443 [2024-12-13 18:06:17.753838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.443 [2024-12-13 18:06:17.753849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.443 [2024-12-13 18:06:17.753857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.443 [2024-12-13 18:06:17.753865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.443 [2024-12-13 18:06:17.753872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.443 [2024-12-13 18:06:17.753881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.443 [2024-12-13 18:06:17.753888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.443 [2024-12-13 18:06:17.753896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:43.704 18:06:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.704 18:06:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.704 18:06:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.704 18:06:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:43.704 18:06:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:43.704 18:06:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.704 18:06:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.704 18:06:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.704 18:06:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:43.966 18:06:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:43.966 18:06:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.966 18:06:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:56.199 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:56.199 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:56.199 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:56.199 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.199 18:06:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.199 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.199 18:06:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.199 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.200 18:06:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.200 [2024-12-13 18:06:30.153136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
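Annotation: the `[[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:... ]]` lines at sw_hotplug.sh@71 are the post-attach check; xtrace backslash-escapes the literal right-hand side of the pattern match, which is why the expected BDF pair shows up as a run of escaped characters. The step reduces to (expected list assumed to come from the nvmes array):

```bash
# Post-attach verification as traced at sw_hotplug.sh@68-71: after the
# 12s settle, re-read the sorted BDF list (bdev_bdfs, sketched above)
# and require it to match the expected controllers exactly. xtrace
# escapes the literal pattern, hence \0\0\0\0\:... in the log.
bdfs=($(bdev_bdfs))
[[ ${bdfs[*]} == "${nvmes[*]}" ]]   # i.e. "0000:00:10.0 0000:00:11.0"
```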
00:11:56.200 [2024-12-13 18:06:30.154395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.200 [2024-12-13 18:06:30.154497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.200 [2024-12-13 18:06:30.154560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.200 [2024-12-13 18:06:30.154681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.200 [2024-12-13 18:06:30.154704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.200 [2024-12-13 18:06:30.154755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.200 [2024-12-13 18:06:30.154782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.200 [2024-12-13 18:06:30.154816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.200 [2024-12-13 18:06:30.154872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.200 [2024-12-13 18:06:30.154913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.200 [2024-12-13 18:06:30.154932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.200 [2024-12-13 18:06:30.154954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.200 18:06:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.200 18:06:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.200 18:06:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:56.200 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:56.200 [2024-12-13 18:06:30.553140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
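Annotation: the `remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s))` summary printed earlier (and the 44.66s one that follows) comes from the timing_cmd wrapper in autotest_common.sh: it runs the helper under bash's `time` keyword with `TIMEFORMAT=%2R`, so only wall-clock seconds are emitted, then echoes the value for the caller to store in helper_time. A minimal sketch of that pattern; the real wrapper's fd plumbing at @709-722 is more involved (it keeps the helper's own output visible) and is elided here:

```bash
# Minimal sketch of the timing pattern behind "helper_time=44.64"
# (autotest_common.sh@713-720). TIMEFORMAT=%2R makes `time` print just
# the elapsed seconds with two decimals; the wrapper captures and
# echoes it. The helper's own output is discarded in this sketch.
timing_cmd() {
    local t TIMEFORMAT=%2R
    t=$( { time "$@" >/dev/null 2>&1; } 2>&1 )  # %2R arrives on the group's stderr
    echo "$t"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2
```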
00:11:56.200 [2024-12-13 18:06:30.554097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.200 [2024-12-13 18:06:30.554129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.200 [2024-12-13 18:06:30.554140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.200 [2024-12-13 18:06:30.554151] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.200 [2024-12-13 18:06:30.554158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.200 [2024-12-13 18:06:30.554166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.200 [2024-12-13 18:06:30.554172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.200 [2024-12-13 18:06:30.554181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.200 [2024-12-13 18:06:30.554188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.200 [2024-12-13 18:06:30.554196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.200 [2024-12-13 18:06:30.554203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.200 [2024-12-13 18:06:30.554210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.461 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:56.461 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.461 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.461 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.461 18:06:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.461 18:06:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.461 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.461 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.461 18:06:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.461 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:56.461 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:56.462 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.462 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.462 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:56.723 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:56.723 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.723 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.723 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.723 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:56.723 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:56.723 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.723 18:06:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:08.954 18:06:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:08.954 18:06:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:08.954 18:06:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:08.954 18:06:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.954 18:06:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.954 18:06:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.954 18:06:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.954 18:06:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:08.954 18:06:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.66 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.66 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:08.954 18:06:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.66 00:12:08.954 18:06:43 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.66 2 00:12:08.954 remove_attach_helper took 44.66s to complete (handling 2 nvme drive(s)) 18:06:43 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:08.954 18:06:43 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80458 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80458 ']' 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80458 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80458 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80458' 00:12:08.954 killing process with pid 80458 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80458 00:12:08.954 18:06:43 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80458 00:12:08.954 18:06:43 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:09.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:09.784 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.784 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.784 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:10.046 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:10.046 00:12:10.046 real 2m27.473s 00:12:10.046 user 1m47.233s 00:12:10.046 sys 0m18.855s 00:12:10.046 18:06:44 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.046 18:06:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.046 ************************************ 00:12:10.046 END TEST sw_hotplug 00:12:10.046 ************************************ 00:12:10.046 18:06:44 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:10.046 18:06:44 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:10.046 18:06:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.046 18:06:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.046 18:06:44 -- common/autotest_common.sh@10 -- # set +x 00:12:10.046 ************************************ 00:12:10.046 START TEST nvme_xnvme 00:12:10.046 ************************************ 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:10.046 * Looking for test storage... 00:12:10.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.046 18:06:44 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:10.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.046 --rc genhtml_branch_coverage=1 00:12:10.046 --rc genhtml_function_coverage=1 00:12:10.046 --rc genhtml_legend=1 00:12:10.046 --rc geninfo_all_blocks=1 00:12:10.046 --rc geninfo_unexecuted_blocks=1 00:12:10.046 00:12:10.046 ' 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:10.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.046 --rc genhtml_branch_coverage=1 00:12:10.046 --rc genhtml_function_coverage=1 00:12:10.046 --rc genhtml_legend=1 00:12:10.046 --rc geninfo_all_blocks=1 00:12:10.046 --rc geninfo_unexecuted_blocks=1 00:12:10.046 00:12:10.046 ' 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:10.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.046 --rc genhtml_branch_coverage=1 00:12:10.046 --rc genhtml_function_coverage=1 00:12:10.046 --rc genhtml_legend=1 00:12:10.046 --rc geninfo_all_blocks=1 00:12:10.046 --rc geninfo_unexecuted_blocks=1 00:12:10.046 00:12:10.046 ' 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:10.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.046 --rc genhtml_branch_coverage=1 00:12:10.046 --rc genhtml_function_coverage=1 00:12:10.046 --rc genhtml_legend=1 00:12:10.046 --rc geninfo_all_blocks=1 00:12:10.046 --rc geninfo_unexecuted_blocks=1 00:12:10.046 00:12:10.046 ' 00:12:10.046 18:06:44 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:10.046 18:06:44 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:10.046 18:06:44 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:10.046 18:06:44 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:10.046 18:06:44 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:10.047 18:06:44 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:10.047 18:06:44 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:10.047 18:06:44 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:10.310 #define SPDK_CONFIG_H 00:12:10.310 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:10.310 #define SPDK_CONFIG_APPS 1 00:12:10.310 #define SPDK_CONFIG_ARCH native 00:12:10.310 #define SPDK_CONFIG_ASAN 1 00:12:10.310 #undef SPDK_CONFIG_AVAHI 00:12:10.310 #undef SPDK_CONFIG_CET 00:12:10.310 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:10.310 #define SPDK_CONFIG_COVERAGE 1 00:12:10.310 #define SPDK_CONFIG_CROSS_PREFIX 00:12:10.310 #undef SPDK_CONFIG_CRYPTO 00:12:10.310 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:10.310 #undef SPDK_CONFIG_CUSTOMOCF 00:12:10.310 #undef SPDK_CONFIG_DAOS 00:12:10.310 #define SPDK_CONFIG_DAOS_DIR 00:12:10.310 #define SPDK_CONFIG_DEBUG 1 00:12:10.310 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:10.310 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:10.310 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:10.310 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:10.310 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:10.310 #undef SPDK_CONFIG_DPDK_UADK 00:12:10.310 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:10.310 #define SPDK_CONFIG_EXAMPLES 1 00:12:10.310 #undef SPDK_CONFIG_FC 00:12:10.310 #define SPDK_CONFIG_FC_PATH 00:12:10.310 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:10.310 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:10.310 #define SPDK_CONFIG_FSDEV 1 00:12:10.310 #undef SPDK_CONFIG_FUSE 00:12:10.310 #undef SPDK_CONFIG_FUZZER 00:12:10.310 #define SPDK_CONFIG_FUZZER_LIB 00:12:10.310 #undef SPDK_CONFIG_GOLANG 00:12:10.310 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:10.310 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:10.310 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:10.310 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:10.310 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:10.310 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:10.310 #undef SPDK_CONFIG_HAVE_LZ4 00:12:10.310 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:10.310 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:10.310 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:10.310 #define SPDK_CONFIG_IDXD 1 00:12:10.310 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:10.310 #undef SPDK_CONFIG_IPSEC_MB 00:12:10.310 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:10.310 #define SPDK_CONFIG_ISAL 1 00:12:10.310 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:10.310 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:10.310 #define SPDK_CONFIG_LIBDIR 00:12:10.310 #undef SPDK_CONFIG_LTO 00:12:10.310 #define SPDK_CONFIG_MAX_LCORES 128 00:12:10.310 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:10.310 #define SPDK_CONFIG_NVME_CUSE 1 00:12:10.310 #undef SPDK_CONFIG_OCF 00:12:10.310 #define SPDK_CONFIG_OCF_PATH 00:12:10.310 #define SPDK_CONFIG_OPENSSL_PATH 00:12:10.310 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:10.310 #define SPDK_CONFIG_PGO_DIR 00:12:10.310 #undef SPDK_CONFIG_PGO_USE 00:12:10.310 #define SPDK_CONFIG_PREFIX /usr/local 00:12:10.310 #undef SPDK_CONFIG_RAID5F 00:12:10.310 #undef SPDK_CONFIG_RBD 00:12:10.310 #define SPDK_CONFIG_RDMA 1 00:12:10.310 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:10.310 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:10.310 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:10.310 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:10.310 #define SPDK_CONFIG_SHARED 1 00:12:10.310 #undef SPDK_CONFIG_SMA 00:12:10.310 #define SPDK_CONFIG_TESTS 1 00:12:10.310 #undef SPDK_CONFIG_TSAN 00:12:10.310 #define SPDK_CONFIG_UBLK 1 00:12:10.310 #define SPDK_CONFIG_UBSAN 1 00:12:10.310 #undef SPDK_CONFIG_UNIT_TESTS 00:12:10.310 #undef SPDK_CONFIG_URING 00:12:10.310 #define SPDK_CONFIG_URING_PATH 00:12:10.310 #undef SPDK_CONFIG_URING_ZNS 00:12:10.310 #undef SPDK_CONFIG_USDT 00:12:10.310 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:10.310 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:10.310 #undef SPDK_CONFIG_VFIO_USER 00:12:10.310 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:10.310 #define SPDK_CONFIG_VHOST 1 00:12:10.310 #define SPDK_CONFIG_VIRTIO 1 00:12:10.310 #undef SPDK_CONFIG_VTUNE 00:12:10.310 #define SPDK_CONFIG_VTUNE_DIR 00:12:10.310 #define SPDK_CONFIG_WERROR 1 00:12:10.310 #define SPDK_CONFIG_WPDK_DIR 00:12:10.310 #define SPDK_CONFIG_XNVME 1 00:12:10.310 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:10.310 18:06:44 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:10.310 18:06:44 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.310 18:06:44 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.310 18:06:44 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.310 18:06:44 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.310 18:06:44 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.310 18:06:44 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.310 18:06:44 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.310 18:06:44 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.310 18:06:44 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:10.310 18:06:44 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.310 18:06:44 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:10.310 18:06:44 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:10.310 18:06:44 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:10.310 18:06:44 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:10.311 18:06:44 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:10.311 18:06:44 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:10.311 18:06:44 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 81790 ]] 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 81790 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.1I0vDx 00:12:10.312 18:06:44 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.1I0vDx/tests/xnvme /tmp/spdk.1I0vDx 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13373026304 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6209802240 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:10.312 18:06:44 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:10.312 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13373026304 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6209802240 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97301147648 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2401632256 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:10.313 * Looking for test storage... 
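For reference, the test-storage probe traced above reduces to the following bash sketch: parse `df -T` into associative arrays keyed by mount point, then accept the first candidate directory whose filesystem has enough free space. Variable names mirror the trace (mounts, fss, avails, requested_size), but this is an illustration of the pattern, not the verbatim autotest_common.sh source.

  #!/usr/bin/env bash
  # Probe mounted filesystems and pick a candidate directory (passed as an
  # argument) with enough free space for the test, as set_test_storage does.
  declare -A mounts fss sizes avails uses
  requested_size=2214592512   # 2 GiB + 64 MiB slack, as requested in the trace
  # df -T prints: source fstype 1K-blocks used available use% mountpoint
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024))    # convert 1K blocks to bytes
      uses["$mount"]=$((use * 1024))
      avails["$mount"]=$((avail * 1024))
  done < <(df -T | grep -v Filesystem)

  for target_dir in "$@"; do
      # Resolve which mount point the candidate directory lives on.
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
      target_space=${avails[$mount]}
      if ((target_space >= requested_size)); then
          printf '* Found test storage at %s\n' "$target_dir"
          exit 0
      fi
  done
  exit 1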
00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13373026304 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:10.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.313 --rc genhtml_branch_coverage=1 00:12:10.313 --rc genhtml_function_coverage=1 00:12:10.313 --rc genhtml_legend=1 00:12:10.313 --rc geninfo_all_blocks=1 00:12:10.313 --rc geninfo_unexecuted_blocks=1 00:12:10.313 00:12:10.313 ' 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:10.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.313 --rc genhtml_branch_coverage=1 00:12:10.313 --rc genhtml_function_coverage=1 00:12:10.313 --rc genhtml_legend=1 00:12:10.313 --rc geninfo_all_blocks=1 
00:12:10.313 --rc geninfo_unexecuted_blocks=1 00:12:10.313 00:12:10.313 ' 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:10.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.313 --rc genhtml_branch_coverage=1 00:12:10.313 --rc genhtml_function_coverage=1 00:12:10.313 --rc genhtml_legend=1 00:12:10.313 --rc geninfo_all_blocks=1 00:12:10.313 --rc geninfo_unexecuted_blocks=1 00:12:10.313 00:12:10.313 ' 00:12:10.313 18:06:44 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:10.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.313 --rc genhtml_branch_coverage=1 00:12:10.313 --rc genhtml_function_coverage=1 00:12:10.313 --rc genhtml_legend=1 00:12:10.313 --rc geninfo_all_blocks=1 00:12:10.313 --rc geninfo_unexecuted_blocks=1 00:12:10.313 00:12:10.313 ' 00:12:10.313 18:06:44 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.313 18:06:44 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.313 18:06:44 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.313 18:06:44 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.314 18:06:44 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.314 18:06:44 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:10.314 18:06:44 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.314 18:06:44 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:10.314 18:06:44 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:10.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:10.837 Waiting for block devices as requested 00:12:10.837 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.837 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.099 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.099 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:16.387 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:16.387 18:06:50 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:16.648 18:06:50 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:16.648 18:06:50 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:16.648 18:06:50 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:16.648 18:06:50 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:16.648 18:06:50 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:16.648 18:06:50 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:16.648 18:06:50 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:16.648 No valid GPT data, bailing 00:12:16.648 18:06:50 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:16.648 18:06:51 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:16.648 18:06:51 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:16.648 18:06:51 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:16.648 18:06:51 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:16.648 18:06:51 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:16.648 18:06:51 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:16.910 18:06:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:16.910 18:06:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.910 18:06:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.910 18:06:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.910 ************************************ 00:12:16.910 START TEST xnvme_rpc 00:12:16.910 ************************************ 00:12:16.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82185 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82185 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82185 ']' 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.910 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:16.910 [2024-12-13 18:06:51.121351] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
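The xnvme_rpc flow traced below reduces to a create / read-back / delete cycle against the freshly started spdk_tgt. A condensed bash sketch, with a plain per-call rpc.py wrapper standing in for the harness's persistent rpc_cmd and paths as used throughout this log:

  # Create an xnvme bdev over the spdk_tgt RPC socket, read its parameters
  # back from the saved framework config, then delete it. Assumes spdk_tgt
  # is already listening on /var/tmp/spdk.sock.
  SPDK=/home/vagrant/spdk_repo/spdk
  rpc_cmd() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
  rpc_xnvme() {  # condensed version of the helper in test/nvme/xnvme/common.sh
      rpc_cmd framework_get_config bdev \
          | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
  }

  rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''
  [[ $(rpc_xnvme name) == xnvme_bdev ]]
  [[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
  [[ $(rpc_xnvme io_mechanism) == libaio ]]
  [[ $(rpc_xnvme conserve_cpu) == false ]]
  rpc_cmd bdev_xnvme_delete xnvme_bdev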
00:12:16.910 [2024-12-13 18:06:51.121490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82185 ] 00:12:16.910 [2024-12-13 18:06:51.265372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.171 [2024-12-13 18:06:51.294444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.743 xnvme_bdev 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.743 18:06:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:17.743 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.004 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:18.004 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:18.004 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.004 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.004 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.004 18:06:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82185 00:12:18.004 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82185 ']' 00:12:18.005 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82185 00:12:18.005 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:18.005 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.005 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82185 00:12:18.005 killing process with pid 82185 00:12:18.005 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.005 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.005 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82185' 00:12:18.005 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82185 00:12:18.005 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82185 00:12:18.266 ************************************ 00:12:18.266 END TEST xnvme_rpc 00:12:18.266 ************************************ 00:12:18.266 00:12:18.266 real 0m1.402s 00:12:18.266 user 0m1.497s 00:12:18.266 sys 0m0.384s 00:12:18.266 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.266 18:06:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.266 18:06:52 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:18.266 18:06:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.266 18:06:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.266 18:06:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.266 ************************************ 00:12:18.266 START TEST xnvme_bdevperf 00:12:18.266 ************************************ 00:12:18.266 18:06:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:18.266 18:06:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:18.266 18:06:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:18.266 18:06:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:18.266 18:06:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:18.266 18:06:52 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:18.266 18:06:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:18.266 18:06:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:18.266 { 00:12:18.266 "subsystems": [ 00:12:18.266 { 00:12:18.266 "subsystem": "bdev", 00:12:18.266 "config": [ 00:12:18.266 { 00:12:18.266 "params": { 00:12:18.266 "io_mechanism": "libaio", 00:12:18.266 "conserve_cpu": false, 00:12:18.266 "filename": "/dev/nvme0n1", 00:12:18.266 "name": "xnvme_bdev" 00:12:18.266 }, 00:12:18.266 "method": "bdev_xnvme_create" 00:12:18.266 }, 00:12:18.266 { 00:12:18.266 "method": "bdev_wait_for_examine" 00:12:18.266 } 00:12:18.266 ] 00:12:18.266 } 00:12:18.266 ] 00:12:18.266 } 00:12:18.266 [2024-12-13 18:06:52.566834] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:18.266 [2024-12-13 18:06:52.567289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82237 ] 00:12:18.528 [2024-12-13 18:06:52.711934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.528 [2024-12-13 18:06:52.741594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.528 Running I/O for 5 seconds... 00:12:20.488 26213.00 IOPS, 102.39 MiB/s [2024-12-13T18:06:56.252Z] 27091.00 IOPS, 105.82 MiB/s [2024-12-13T18:06:57.196Z] 26410.00 IOPS, 103.16 MiB/s [2024-12-13T18:06:58.139Z] 26071.75 IOPS, 101.84 MiB/s 00:12:23.762 Latency(us) 00:12:23.762 [2024-12-13T18:06:58.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.762 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:23.762 xnvme_bdev : 5.00 25959.16 101.40 0.00 0.00 2460.23 373.37 6805.66 00:12:23.762 [2024-12-13T18:06:58.139Z] =================================================================================================================== 00:12:23.762 [2024-12-13T18:06:58.139Z] Total : 25959.16 101.40 0.00 0.00 2460.23 373.37 6805.66 00:12:23.762 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:23.763 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:23.763 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:23.763 18:06:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:23.763 18:06:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:23.763 { 00:12:23.763 "subsystems": [ 00:12:23.763 { 00:12:23.763 "subsystem": "bdev", 00:12:23.763 "config": [ 00:12:23.763 { 00:12:23.763 "params": { 00:12:23.763 "io_mechanism": "libaio", 00:12:23.763 "conserve_cpu": false, 00:12:23.763 "filename": "/dev/nvme0n1", 00:12:23.763 "name": "xnvme_bdev" 00:12:23.763 }, 00:12:23.763 "method": "bdev_xnvme_create" 00:12:23.763 }, 00:12:23.763 { 00:12:23.763 "method": "bdev_wait_for_examine" 00:12:23.763 } 00:12:23.763 ] 00:12:23.763 } 00:12:23.763 ] 00:12:23.763 } 00:12:23.763 [2024-12-13 18:06:58.125324] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
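The JSON document above is produced by gen_conf and handed to bdevperf on /dev/fd/62. A standalone equivalent of this randwrite job -- a sketch only, assuming an SPDK build tree as the working directory and the same /dev/nvme0n1 test device -- would write the config to a file instead:

cat > xnvme_libaio.json <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create",
   "params": {"io_mechanism": "libaio", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
# Flags as in the trace: queue depth 64, 4 KiB random writes for 5 seconds,
# restricted to the bdev named xnvme_bdev.
./build/examples/bdevperf --json xnvme_libaio.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096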
00:12:23.763 [2024-12-13 18:06:58.125451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82307 ] 00:12:24.024 [2024-12-13 18:06:58.273961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.024 [2024-12-13 18:06:58.302460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.285 Running I/O for 5 seconds... 00:12:26.216 35976.00 IOPS, 140.53 MiB/s [2024-12-13T18:07:01.532Z] 33958.50 IOPS, 132.65 MiB/s [2024-12-13T18:07:02.472Z] 33589.67 IOPS, 131.21 MiB/s [2024-12-13T18:07:03.854Z] 34068.00 IOPS, 133.08 MiB/s 00:12:29.477 Latency(us) 00:12:29.477 [2024-12-13T18:07:03.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.477 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:29.477 xnvme_bdev : 5.00 34481.32 134.69 0.00 0.00 1851.65 422.20 7360.20 00:12:29.477 [2024-12-13T18:07:03.854Z] =================================================================================================================== 00:12:29.477 [2024-12-13T18:07:03.854Z] Total : 34481.32 134.69 0.00 0.00 1851.65 422.20 7360.20 00:12:29.477 00:12:29.477 real 0m11.111s 00:12:29.477 user 0m3.430s 00:12:29.477 sys 0m6.090s 00:12:29.477 18:07:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.477 18:07:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:29.477 ************************************ 00:12:29.477 END TEST xnvme_bdevperf 00:12:29.477 ************************************ 00:12:29.477 18:07:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:29.477 18:07:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:29.477 18:07:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.477 18:07:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.477 ************************************ 00:12:29.477 START TEST xnvme_fio_plugin 00:12:29.477 ************************************ 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:29.477 18:07:03 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:29.477 18:07:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:29.477 { 00:12:29.477 "subsystems": [ 00:12:29.477 { 00:12:29.477 "subsystem": "bdev", 00:12:29.477 "config": [ 00:12:29.477 { 00:12:29.477 "params": { 00:12:29.477 "io_mechanism": "libaio", 00:12:29.477 "conserve_cpu": false, 00:12:29.477 "filename": "/dev/nvme0n1", 00:12:29.477 "name": "xnvme_bdev" 00:12:29.477 }, 00:12:29.477 "method": "bdev_xnvme_create" 00:12:29.477 }, 00:12:29.477 { 00:12:29.477 "method": "bdev_wait_for_examine" 00:12:29.477 } 00:12:29.477 ] 00:12:29.477 } 00:12:29.477 ] 00:12:29.477 } 00:12:29.738 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:29.738 fio-3.35 00:12:29.738 Starting 1 thread 00:12:35.021 00:12:35.021 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82415: Fri Dec 13 18:07:09 2024 00:12:35.021 read: IOPS=32.2k, BW=126MiB/s (132MB/s)(629MiB/5001msec) 00:12:35.021 slat (usec): min=4, max=2279, avg=18.68, stdev=95.57 00:12:35.021 clat (usec): min=115, max=5376, avg=1469.79, stdev=507.08 00:12:35.021 lat (usec): min=184, max=5460, avg=1488.47, stdev=495.53 00:12:35.021 clat percentiles (usec): 00:12:35.021 | 1.00th=[ 314], 5.00th=[ 619], 10.00th=[ 807], 20.00th=[ 1045], 00:12:35.021 | 30.00th=[ 1221], 40.00th=[ 1369], 50.00th=[ 1500], 60.00th=[ 1614], 00:12:35.021 | 70.00th=[ 1713], 80.00th=[ 1860], 90.00th=[ 2057], 95.00th=[ 2278], 00:12:35.021 | 99.00th=[ 2835], 99.50th=[ 3032], 99.90th=[ 3621], 99.95th=[ 3982], 00:12:35.021 | 99.99th=[ 4752] 00:12:35.021 bw ( KiB/s): min=124192, max=134568, per=100.00%, avg=129298.67, stdev=3304.09, 
samples=9 00:12:35.021 iops : min=31048, max=33642, avg=32324.67, stdev=826.02, samples=9 00:12:35.021 lat (usec) : 250=0.47%, 500=2.35%, 750=5.48%, 1000=9.49% 00:12:35.021 lat (msec) : 2=69.96%, 4=12.22%, 10=0.05% 00:12:35.021 cpu : usr=52.50%, sys=40.50%, ctx=12, majf=0, minf=1065 00:12:35.021 IO depths : 1=0.8%, 2=1.7%, 4=3.7%, 8=8.7%, 16=22.5%, 32=60.6%, >=64=2.1% 00:12:35.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.021 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:35.021 issued rwts: total=160969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.021 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:35.021 00:12:35.021 Run status group 0 (all jobs): 00:12:35.021 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=629MiB (659MB), run=5001-5001msec 00:12:35.594 ----------------------------------------------------- 00:12:35.594 Suppressions used: 00:12:35.594 count bytes template 00:12:35.594 1 11 /usr/src/fio/parse.c 00:12:35.594 1 8 libtcmalloc_minimal.so 00:12:35.594 1 904 libcrypto.so 00:12:35.594 ----------------------------------------------------- 00:12:35.594 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:35.594 18:07:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:35.595 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:35.595 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:35.595 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:35.595 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:35.595 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:35.595 18:07:09 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:35.595 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:35.595 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:35.595 18:07:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:35.595 { 00:12:35.595 "subsystems": [ 00:12:35.595 { 00:12:35.595 "subsystem": "bdev", 00:12:35.595 "config": [ 00:12:35.595 { 00:12:35.595 "params": { 00:12:35.595 "io_mechanism": "libaio", 00:12:35.595 "conserve_cpu": false, 00:12:35.595 "filename": "/dev/nvme0n1", 00:12:35.595 "name": "xnvme_bdev" 00:12:35.595 }, 00:12:35.595 "method": "bdev_xnvme_create" 00:12:35.595 }, 00:12:35.595 { 00:12:35.595 "method": "bdev_wait_for_examine" 00:12:35.595 } 00:12:35.595 ] 00:12:35.595 } 00:12:35.595 ] 00:12:35.595 } 00:12:35.595 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:35.595 fio-3.35 00:12:35.595 Starting 1 thread 00:12:42.186 00:12:42.186 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82501: Fri Dec 13 18:07:15 2024 00:12:42.186 write: IOPS=32.3k, BW=126MiB/s (132MB/s)(633MiB/5019msec); 0 zone resets 00:12:42.186 slat (usec): min=4, max=2133, avg=20.01, stdev=90.77 00:12:42.186 clat (usec): min=70, max=41383, avg=1438.79, stdev=1634.62 00:12:42.186 lat (usec): min=80, max=41405, avg=1458.79, stdev=1631.15 00:12:42.186 clat percentiles (usec): 00:12:42.186 | 1.00th=[ 289], 5.00th=[ 529], 10.00th=[ 685], 20.00th=[ 906], 00:12:42.186 | 30.00th=[ 1074], 40.00th=[ 1205], 50.00th=[ 1319], 60.00th=[ 1450], 00:12:42.186 | 70.00th=[ 1598], 80.00th=[ 1745], 90.00th=[ 1991], 95.00th=[ 2212], 00:12:42.186 | 99.00th=[ 2933], 99.50th=[ 4015], 99.90th=[26346], 99.95th=[28181], 00:12:42.186 | 99.99th=[34866] 00:12:42.186 bw ( KiB/s): min=69016, max=143696, per=100.00%, avg=129592.90, stdev=21849.25, samples=10 00:12:42.186 iops : min=17254, max=35924, avg=32398.20, stdev=5462.30, samples=10 00:12:42.186 lat (usec) : 100=0.01%, 250=0.63%, 500=3.80%, 750=8.39%, 1000=12.56% 00:12:42.186 lat (msec) : 2=65.01%, 4=9.10%, 10=0.07%, 20=0.02%, 50=0.41% 00:12:42.186 cpu : usr=46.89%, sys=44.92%, ctx=13, majf=0, minf=1066 00:12:42.186 IO depths : 1=0.5%, 2=1.3%, 4=3.1%, 8=8.0%, 16=21.9%, 32=62.7%, >=64=2.5% 00:12:42.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.186 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:42.186 issued rwts: total=0,162089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.186 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:42.186 00:12:42.186 Run status group 0 (all jobs): 00:12:42.186 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=633MiB (664MB), run=5019-5019msec 00:12:42.186 ----------------------------------------------------- 00:12:42.186 Suppressions used: 00:12:42.186 count bytes template 00:12:42.186 1 11 /usr/src/fio/parse.c 00:12:42.186 1 8 libtcmalloc_minimal.so 00:12:42.186 1 904 libcrypto.so 00:12:42.186 ----------------------------------------------------- 00:12:42.186 00:12:42.186 ************************************ 00:12:42.186 END TEST xnvme_fio_plugin 00:12:42.186 
************************************ 00:12:42.186 00:12:42.186 real 0m12.058s 00:12:42.186 user 0m6.087s 00:12:42.186 sys 0m4.829s 00:12:42.186 18:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.186 18:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:42.186 18:07:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:42.187 18:07:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:42.187 18:07:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:42.187 18:07:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:42.187 18:07:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:42.187 18:07:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.187 18:07:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.187 ************************************ 00:12:42.187 START TEST xnvme_rpc 00:12:42.187 ************************************ 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:42.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82576 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82576 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82576 ']' 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.187 18:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.187 [2024-12-13 18:07:15.898419] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
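The spdk_tgt starting here serves the second xnvme_rpc pass, which repeats the create/inspect/delete flow with CPU conservation switched on. Condensed from the trace that follows, the sequence is (rpc_cmd is the suite's shell wrapper around scripts/rpc.py and /var/tmp/spdk.sock; device path as in the test):

rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # trailing -c => conserve_cpu=true
rpc_cmd framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expected: true
rpc_cmd bdev_xnvme_delete xnvme_bdev                          # tear the bdev down again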
00:12:42.187 [2024-12-13 18:07:15.898795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82576 ] 00:12:42.187 [2024-12-13 18:07:16.044323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.187 [2024-12-13 18:07:16.072964] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.447 xnvme_bdev 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:42.447 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82576 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82576 ']' 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82576 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82576 00:12:42.707 killing process with pid 82576 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82576' 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82576 00:12:42.707 18:07:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82576 00:12:42.967 ************************************ 00:12:42.967 END TEST xnvme_rpc 00:12:42.967 ************************************ 00:12:42.967 00:12:42.967 real 0m1.421s 00:12:42.967 user 0m1.465s 00:12:42.967 sys 0m0.414s 00:12:42.967 18:07:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.967 18:07:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.967 18:07:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:42.967 18:07:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:42.967 18:07:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.967 18:07:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.967 ************************************ 00:12:42.967 START TEST xnvme_bdevperf 00:12:42.968 ************************************ 00:12:42.968 18:07:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:42.968 18:07:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:42.968 18:07:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:42.968 18:07:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:42.968 18:07:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:42.968 18:07:17 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:42.968 18:07:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:42.968 18:07:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:42.968 { 00:12:42.968 "subsystems": [ 00:12:42.968 { 00:12:42.968 "subsystem": "bdev", 00:12:42.968 "config": [ 00:12:42.968 { 00:12:42.968 "params": { 00:12:42.968 "io_mechanism": "libaio", 00:12:42.968 "conserve_cpu": true, 00:12:42.968 "filename": "/dev/nvme0n1", 00:12:42.968 "name": "xnvme_bdev" 00:12:42.968 }, 00:12:42.968 "method": "bdev_xnvme_create" 00:12:42.968 }, 00:12:42.968 { 00:12:42.968 "method": "bdev_wait_for_examine" 00:12:42.968 } 00:12:42.968 ] 00:12:42.968 } 00:12:42.968 ] 00:12:42.968 } 00:12:43.228 [2024-12-13 18:07:17.369105] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:43.228 [2024-12-13 18:07:17.369261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82634 ] 00:12:43.228 [2024-12-13 18:07:17.518473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.228 [2024-12-13 18:07:17.546945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.489 Running I/O for 5 seconds... 00:12:45.377 34735.00 IOPS, 135.68 MiB/s [2024-12-13T18:07:20.695Z] 35648.50 IOPS, 139.25 MiB/s [2024-12-13T18:07:22.082Z] 35690.67 IOPS, 139.42 MiB/s [2024-12-13T18:07:22.690Z] 35551.00 IOPS, 138.87 MiB/s [2024-12-13T18:07:22.690Z] 35273.40 IOPS, 137.79 MiB/s 00:12:48.313 Latency(us) 00:12:48.313 [2024-12-13T18:07:22.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.313 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:48.313 xnvme_bdev : 5.01 35238.47 137.65 0.00 0.00 1811.39 89.01 14014.62 00:12:48.313 [2024-12-13T18:07:22.690Z] =================================================================================================================== 00:12:48.313 [2024-12-13T18:07:22.690Z] Total : 35238.47 137.65 0.00 0.00 1811.39 89.01 14014.62 00:12:48.573 18:07:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.573 18:07:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:48.573 18:07:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:48.573 18:07:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:48.573 18:07:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:48.573 { 00:12:48.573 "subsystems": [ 00:12:48.573 { 00:12:48.573 "subsystem": "bdev", 00:12:48.573 "config": [ 00:12:48.573 { 00:12:48.573 "params": { 00:12:48.573 "io_mechanism": "libaio", 00:12:48.573 "conserve_cpu": true, 00:12:48.573 "filename": "/dev/nvme0n1", 00:12:48.573 "name": "xnvme_bdev" 00:12:48.573 }, 00:12:48.573 "method": "bdev_xnvme_create" 00:12:48.573 }, 00:12:48.573 { 00:12:48.573 "method": "bdev_wait_for_examine" 00:12:48.573 } 00:12:48.573 ] 00:12:48.573 } 00:12:48.573 ] 00:12:48.573 } 00:12:48.573 [2024-12-13 18:07:22.938749] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
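A quick consistency check for result tables like the randread one above: at a 4096-byte I/O size, MiB/s is IOPS/256, and with the queue kept full the average latency should sit near queue_depth/IOPS (Little's law). For the 35238.47 IOPS line:

echo '35238.47 / 256' | bc -l            # 137.65 -> matches the reported MiB/s
echo '64 / 35238.47 * 1000000' | bc -l   # ~1816 us, close to the 1811.39 us average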
00:12:48.573 [2024-12-13 18:07:22.939163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82698 ] 00:12:48.835 [2024-12-13 18:07:23.079034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.835 [2024-12-13 18:07:23.107421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.096 Running I/O for 5 seconds... 00:12:51.098 6163.00 IOPS, 24.07 MiB/s [2024-12-13T18:07:26.416Z] 6116.00 IOPS, 23.89 MiB/s [2024-12-13T18:07:27.360Z] 6239.00 IOPS, 24.37 MiB/s [2024-12-13T18:07:28.298Z] 6232.00 IOPS, 24.34 MiB/s [2024-12-13T18:07:28.298Z] 6396.60 IOPS, 24.99 MiB/s 00:12:53.921 Latency(us) 00:12:53.921 [2024-12-13T18:07:28.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.921 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:53.921 xnvme_bdev : 5.01 6394.92 24.98 0.00 0.00 9992.45 44.90 28432.54 00:12:53.921 [2024-12-13T18:07:28.298Z] =================================================================================================================== 00:12:53.921 [2024-12-13T18:07:28.298Z] Total : 6394.92 24.98 0.00 0.00 9992.45 44.90 28432.54 00:12:54.179 00:12:54.179 real 0m11.095s 00:12:54.179 user 0m6.554s 00:12:54.179 sys 0m3.399s 00:12:54.179 18:07:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.179 ************************************ 00:12:54.179 END TEST xnvme_bdevperf 00:12:54.179 ************************************ 00:12:54.179 18:07:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:54.179 18:07:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:54.179 18:07:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:54.179 18:07:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.179 18:07:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:54.179 ************************************ 00:12:54.179 START TEST xnvme_fio_plugin 00:12:54.179 ************************************ 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:54.179 18:07:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:54.179 { 00:12:54.179 "subsystems": [ 00:12:54.179 { 00:12:54.179 "subsystem": "bdev", 00:12:54.179 "config": [ 00:12:54.179 { 00:12:54.179 "params": { 00:12:54.179 "io_mechanism": "libaio", 00:12:54.179 "conserve_cpu": true, 00:12:54.179 "filename": "/dev/nvme0n1", 00:12:54.179 "name": "xnvme_bdev" 00:12:54.179 }, 00:12:54.179 "method": "bdev_xnvme_create" 00:12:54.179 }, 00:12:54.179 { 00:12:54.179 "method": "bdev_wait_for_examine" 00:12:54.179 } 00:12:54.179 ] 00:12:54.179 } 00:12:54.179 ] 00:12:54.179 } 00:12:54.438 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:54.438 fio-3.35 00:12:54.438 Starting 1 thread 00:12:59.716 00:12:59.716 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82806: Fri Dec 13 18:07:34 2024 00:12:59.716 read: IOPS=37.1k, BW=145MiB/s (152MB/s)(724MiB/5001msec) 00:12:59.716 slat (usec): min=4, max=2124, avg=19.42, stdev=81.88 00:12:59.716 clat (usec): min=107, max=4456, avg=1199.73, stdev=508.85 00:12:59.716 lat (usec): min=164, max=4646, avg=1219.15, stdev=502.47 00:12:59.716 clat percentiles (usec): 00:12:59.716 | 1.00th=[ 239], 5.00th=[ 445], 10.00th=[ 578], 20.00th=[ 775], 00:12:59.716 | 30.00th=[ 914], 40.00th=[ 1045], 50.00th=[ 1172], 60.00th=[ 1303], 00:12:59.716 | 70.00th=[ 1434], 80.00th=[ 1582], 90.00th=[ 1811], 95.00th=[ 2040], 00:12:59.716 | 99.00th=[ 2737], 99.50th=[ 3064], 99.90th=[ 3687], 99.95th=[ 3818], 00:12:59.716 | 99.99th=[ 4228] 00:12:59.716 bw ( KiB/s): min=134296, max=174856, 
per=100.00%, avg=149740.44, stdev=13053.59, samples=9 00:12:59.716 iops : min=33574, max=43714, avg=37435.11, stdev=3263.40, samples=9 00:12:59.716 lat (usec) : 250=1.19%, 500=5.54%, 750=12.06%, 1000=17.68% 00:12:59.716 lat (msec) : 2=57.87%, 4=5.64%, 10=0.02% 00:12:59.716 cpu : usr=41.34%, sys=50.42%, ctx=13, majf=0, minf=1065 00:12:59.716 IO depths : 1=0.5%, 2=1.2%, 4=3.3%, 8=8.9%, 16=23.5%, 32=60.5%, >=64=2.1% 00:12:59.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.716 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:59.716 issued rwts: total=185311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.716 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:59.716 00:12:59.716 Run status group 0 (all jobs): 00:12:59.716 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=724MiB (759MB), run=5001-5001msec 00:13:00.287 ----------------------------------------------------- 00:13:00.287 Suppressions used: 00:13:00.287 count bytes template 00:13:00.287 1 11 /usr/src/fio/parse.c 00:13:00.287 1 8 libtcmalloc_minimal.so 00:13:00.287 1 904 libcrypto.so 00:13:00.287 ----------------------------------------------------- 00:13:00.287 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:00.287 18:07:34 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:00.287 18:07:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:00.287 { 00:13:00.287 "subsystems": [ 00:13:00.287 { 00:13:00.287 "subsystem": "bdev", 00:13:00.287 "config": [ 00:13:00.287 { 00:13:00.287 "params": { 00:13:00.287 "io_mechanism": "libaio", 00:13:00.287 "conserve_cpu": true, 00:13:00.287 "filename": "/dev/nvme0n1", 00:13:00.287 "name": "xnvme_bdev" 00:13:00.287 }, 00:13:00.287 "method": "bdev_xnvme_create" 00:13:00.287 }, 00:13:00.287 { 00:13:00.287 "method": "bdev_wait_for_examine" 00:13:00.287 } 00:13:00.287 ] 00:13:00.287 } 00:13:00.287 ] 00:13:00.287 } 00:13:00.287 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:00.287 fio-3.35 00:13:00.287 Starting 1 thread 00:13:06.870 00:13:06.870 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82887: Fri Dec 13 18:07:40 2024 00:13:06.870 write: IOPS=37.2k, BW=145MiB/s (152MB/s)(728MiB/5010msec); 0 zone resets 00:13:06.870 slat (usec): min=4, max=2111, avg=19.70, stdev=75.64 00:13:06.870 clat (usec): min=8, max=25099, avg=1180.99, stdev=1067.37 00:13:06.870 lat (usec): min=74, max=25115, avg=1200.70, stdev=1065.03 00:13:06.870 clat percentiles (usec): 00:13:06.870 | 1.00th=[ 235], 5.00th=[ 383], 10.00th=[ 523], 20.00th=[ 693], 00:13:06.870 | 30.00th=[ 840], 40.00th=[ 963], 50.00th=[ 1074], 60.00th=[ 1205], 00:13:06.870 | 70.00th=[ 1352], 80.00th=[ 1516], 90.00th=[ 1795], 95.00th=[ 2073], 00:13:06.870 | 99.00th=[ 2868], 99.50th=[ 3294], 99.90th=[19268], 99.95th=[20579], 00:13:06.870 | 99.99th=[23987] 00:13:06.870 bw ( KiB/s): min=116952, max=161008, per=100.00%, avg=149099.20, stdev=13355.57, samples=10 00:13:06.870 iops : min=29238, max=40252, avg=37274.80, stdev=3338.89, samples=10 00:13:06.870 lat (usec) : 10=0.01%, 100=0.01%, 250=1.31%, 500=7.73%, 750=14.90% 00:13:06.870 lat (usec) : 1000=19.30% 00:13:06.870 lat (msec) : 2=50.85%, 4=5.57%, 10=0.04%, 20=0.23%, 50=0.07% 00:13:06.870 cpu : usr=41.47%, sys=49.45%, ctx=17, majf=0, minf=1066 00:13:06.870 IO depths : 1=0.4%, 2=1.1%, 4=3.1%, 8=8.8%, 16=23.7%, 32=60.7%, >=64=2.2% 00:13:06.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.870 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:13:06.870 issued rwts: total=0,186437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.870 00:13:06.870 Run status group 0 (all jobs): 00:13:06.870 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=728MiB (764MB), run=5010-5010msec 00:13:06.870 ----------------------------------------------------- 00:13:06.870 Suppressions used: 00:13:06.870 count bytes template 00:13:06.870 1 11 /usr/src/fio/parse.c 00:13:06.870 1 8 libtcmalloc_minimal.so 00:13:06.870 1 904 libcrypto.so 00:13:06.870 ----------------------------------------------------- 00:13:06.870 00:13:06.870 ************************************ 
00:13:06.870 END TEST xnvme_fio_plugin 00:13:06.870 ************************************ 00:13:06.870 00:13:06.870 real 0m11.932s 00:13:06.870 user 0m5.205s 00:13:06.870 sys 0m5.471s 00:13:06.870 18:07:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.870 18:07:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:06.870 18:07:40 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:06.870 18:07:40 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:06.870 18:07:40 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:06.870 18:07:40 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:06.870 18:07:40 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:06.870 18:07:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:06.870 18:07:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:06.870 18:07:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:06.870 18:07:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:06.870 18:07:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:06.870 18:07:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.870 18:07:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.870 ************************************ 00:13:06.870 START TEST xnvme_rpc 00:13:06.870 ************************************ 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82962 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82962 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82962 ']' 00:13:06.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.870 18:07:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:06.870 [2024-12-13 18:07:40.524851] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
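The fio passes that just finished drive the bdev layer through SPDK's spdk_bdev external ioengine. Stripped of the harness plumbing, an invocation like those traced above looks roughly as follows -- a sketch; the ASan preload applies only to sanitizer builds, and the JSON config is a bdev_xnvme_create document like the earlier one (conserve_cpu flipped per pass):

LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=xnvme_libaio.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev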
00:13:06.870 [2024-12-13 18:07:40.524973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82962 ] 00:13:06.870 [2024-12-13 18:07:40.670537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.870 [2024-12-13 18:07:40.694902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.132 xnvme_bdev 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.132 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82962 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82962 ']' 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82962 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82962 00:13:07.394 killing process with pid 82962 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82962' 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82962 00:13:07.394 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82962 00:13:07.655 ************************************ 00:13:07.655 END TEST xnvme_rpc 00:13:07.655 ************************************ 00:13:07.655 00:13:07.655 real 0m1.406s 00:13:07.655 user 0m1.544s 00:13:07.655 sys 0m0.331s 00:13:07.655 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.655 18:07:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.655 18:07:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:07.655 18:07:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.655 18:07:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.655 18:07:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.655 ************************************ 00:13:07.655 START TEST xnvme_bdevperf 00:13:07.655 ************************************ 00:13:07.655 18:07:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:07.655 18:07:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:07.655 18:07:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:07.655 18:07:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.655 18:07:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:07.655 18:07:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:07.655 18:07:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:07.655 18:07:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.655 { 00:13:07.655 "subsystems": [ 00:13:07.655 { 00:13:07.655 "subsystem": "bdev", 00:13:07.655 "config": [ 00:13:07.655 { 00:13:07.655 "params": { 00:13:07.655 "io_mechanism": "io_uring", 00:13:07.655 "conserve_cpu": false, 00:13:07.655 "filename": "/dev/nvme0n1", 00:13:07.655 "name": "xnvme_bdev" 00:13:07.655 }, 00:13:07.655 "method": "bdev_xnvme_create" 00:13:07.655 }, 00:13:07.655 { 00:13:07.655 "method": "bdev_wait_for_examine" 00:13:07.655 } 00:13:07.655 ] 00:13:07.655 } 00:13:07.655 ] 00:13:07.655 } 00:13:07.655 [2024-12-13 18:07:41.992919] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:07.655 [2024-12-13 18:07:41.993061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83025 ] 00:13:07.916 [2024-12-13 18:07:42.139201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.916 [2024-12-13 18:07:42.168006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.916 Running I/O for 5 seconds... 00:13:10.247 32766.00 IOPS, 127.99 MiB/s [2024-12-13T18:07:45.570Z] 33451.50 IOPS, 130.67 MiB/s [2024-12-13T18:07:46.548Z] 33356.67 IOPS, 130.30 MiB/s [2024-12-13T18:07:47.508Z] 32837.50 IOPS, 128.27 MiB/s 00:13:13.132 Latency(us) 00:13:13.132 [2024-12-13T18:07:47.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.132 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:13.132 xnvme_bdev : 5.00 32620.12 127.42 0.00 0.00 1957.60 315.08 16535.24 00:13:13.132 [2024-12-13T18:07:47.509Z] =================================================================================================================== 00:13:13.132 [2024-12-13T18:07:47.509Z] Total : 32620.12 127.42 0.00 0.00 1957.60 315.08 16535.24 00:13:13.132 18:07:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:13.132 18:07:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:13.132 18:07:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:13.132 18:07:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:13.132 18:07:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.132 { 00:13:13.132 "subsystems": [ 00:13:13.132 { 00:13:13.132 "subsystem": "bdev", 00:13:13.132 "config": [ 00:13:13.132 { 00:13:13.132 "params": { 00:13:13.132 "io_mechanism": "io_uring", 00:13:13.132 "conserve_cpu": false, 00:13:13.132 "filename": "/dev/nvme0n1", 00:13:13.132 "name": "xnvme_bdev" 00:13:13.132 }, 00:13:13.132 "method": "bdev_xnvme_create" 00:13:13.132 }, 00:13:13.132 { 00:13:13.132 "method": "bdev_wait_for_examine" 00:13:13.132 } 00:13:13.132 ] 00:13:13.132 } 00:13:13.132 ] 00:13:13.132 } 00:13:13.391 [2024-12-13 18:07:47.525221] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
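The bdevperf invocation traced above can be reproduced by hand. A minimal sketch, assuming this run's repo layout under /home/vagrant/spdk_repo/spdk: the suite pipes the JSON bdev config in on a file descriptor, and process substitution stands in for that here (the shell picks the fd number, so the /dev/fd/62 seen in the trace is incidental). The flags match the run banner: -q 64 is the queue depth, -w randread the workload, -t 5 the runtime in seconds, -T xnvme_bdev the target bdev, -o 4096 the IO size in bytes.

cd /home/vagrant/spdk_repo/spdk
build/examples/bdevperf --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096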
00:13:13.391 [2024-12-13 18:07:47.525391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83089 ] 00:13:13.391 [2024-12-13 18:07:47.670327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.391 [2024-12-13 18:07:47.698979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.652 Running I/O for 5 seconds... 00:13:15.535 9297.00 IOPS, 36.32 MiB/s [2024-12-13T18:07:50.854Z] 9238.50 IOPS, 36.09 MiB/s [2024-12-13T18:07:51.796Z] 9426.33 IOPS, 36.82 MiB/s [2024-12-13T18:07:53.182Z] 9466.75 IOPS, 36.98 MiB/s [2024-12-13T18:07:53.182Z] 9425.40 IOPS, 36.82 MiB/s 00:13:18.805 Latency(us) 00:13:18.805 [2024-12-13T18:07:53.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.805 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:18.805 xnvme_bdev : 5.01 9414.83 36.78 0.00 0.00 6785.89 71.29 29037.49 00:13:18.805 [2024-12-13T18:07:53.182Z] =================================================================================================================== 00:13:18.805 [2024-12-13T18:07:53.182Z] Total : 9414.83 36.78 0.00 0.00 6785.89 71.29 29037.49 00:13:18.805 ************************************ 00:13:18.805 END TEST xnvme_bdevperf 00:13:18.805 ************************************ 00:13:18.805 00:13:18.805 real 0m11.056s 00:13:18.805 user 0m4.349s 00:13:18.805 sys 0m6.461s 00:13:18.805 18:07:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.805 18:07:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:18.805 18:07:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:18.805 18:07:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:18.805 18:07:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.805 18:07:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.805 ************************************ 00:13:18.805 START TEST xnvme_fio_plugin 00:13:18.805 ************************************ 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:18.805 18:07:53 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:18.805 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:18.806 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:18.806 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:18.806 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:18.806 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:18.806 18:07:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:18.806 { 00:13:18.806 "subsystems": [ 00:13:18.806 { 00:13:18.806 "subsystem": "bdev", 00:13:18.806 "config": [ 00:13:18.806 { 00:13:18.806 "params": { 00:13:18.806 "io_mechanism": "io_uring", 00:13:18.806 "conserve_cpu": false, 00:13:18.806 "filename": "/dev/nvme0n1", 00:13:18.806 "name": "xnvme_bdev" 00:13:18.806 }, 00:13:18.806 "method": "bdev_xnvme_create" 00:13:18.806 }, 00:13:18.806 { 00:13:18.806 "method": "bdev_wait_for_examine" 00:13:18.806 } 00:13:18.806 ] 00:13:18.806 } 00:13:18.806 ] 00:13:18.806 } 00:13:19.066 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:19.066 fio-3.35 00:13:19.066 Starting 1 thread 00:13:24.357 00:13:24.357 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83196: Fri Dec 13 18:07:58 2024 00:13:24.357 read: IOPS=33.6k, BW=131MiB/s (138MB/s)(656MiB/5001msec) 00:13:24.357 slat (usec): min=2, max=118, avg= 4.10, stdev= 2.23 00:13:24.357 clat (usec): min=922, max=3397, avg=1739.86, stdev=296.18 00:13:24.357 lat (usec): min=925, max=3422, avg=1743.96, stdev=296.68 00:13:24.357 clat percentiles (usec): 00:13:24.357 | 1.00th=[ 1221], 5.00th=[ 1336], 10.00th=[ 1418], 20.00th=[ 1500], 00:13:24.357 | 30.00th=[ 1565], 40.00th=[ 1631], 50.00th=[ 1696], 60.00th=[ 1762], 00:13:24.357 | 70.00th=[ 1860], 80.00th=[ 1975], 90.00th=[ 2147], 95.00th=[ 2311], 00:13:24.357 | 99.00th=[ 2606], 99.50th=[ 2737], 99.90th=[ 2999], 99.95th=[ 3097], 00:13:24.357 | 99.99th=[ 3261] 00:13:24.357 bw ( KiB/s): min=128512, 
max=139264, per=100.00%, avg=134542.22, stdev=3432.48, samples=9 00:13:24.357 iops : min=32128, max=34816, avg=33635.56, stdev=858.12, samples=9 00:13:24.357 lat (usec) : 1000=0.01% 00:13:24.357 lat (msec) : 2=82.42%, 4=17.57% 00:13:24.357 cpu : usr=31.74%, sys=66.82%, ctx=13, majf=0, minf=1063 00:13:24.357 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:24.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.357 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:24.357 issued rwts: total=168000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.357 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:24.357 00:13:24.357 Run status group 0 (all jobs): 00:13:24.357 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=656MiB (688MB), run=5001-5001msec 00:13:24.930 ----------------------------------------------------- 00:13:24.930 Suppressions used: 00:13:24.930 count bytes template 00:13:24.930 1 11 /usr/src/fio/parse.c 00:13:24.930 1 8 libtcmalloc_minimal.so 00:13:24.930 1 904 libcrypto.so 00:13:24.930 ----------------------------------------------------- 00:13:24.930 00:13:24.930 18:07:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:24.930 18:07:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:24.930 18:07:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:24.930 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:24.930 18:07:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:24.930 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:24.931 18:07:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:24.931 { 00:13:24.931 "subsystems": [ 00:13:24.931 { 00:13:24.931 "subsystem": "bdev", 00:13:24.931 "config": [ 00:13:24.931 { 00:13:24.931 "params": { 00:13:24.931 "io_mechanism": "io_uring", 00:13:24.931 "conserve_cpu": false, 00:13:24.931 "filename": "/dev/nvme0n1", 00:13:24.931 "name": "xnvme_bdev" 00:13:24.931 }, 00:13:24.931 "method": "bdev_xnvme_create" 00:13:24.931 }, 00:13:24.931 { 00:13:24.931 "method": "bdev_wait_for_examine" 00:13:24.931 } 00:13:24.931 ] 00:13:24.931 } 00:13:24.931 ] 00:13:24.931 } 00:13:24.931 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:24.931 fio-3.35 00:13:24.931 Starting 1 thread 00:13:31.518 00:13:31.518 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83278: Fri Dec 13 18:08:04 2024 00:13:31.518 write: IOPS=35.8k, BW=140MiB/s (147MB/s)(699MiB/5001msec); 0 zone resets 00:13:31.518 slat (nsec): min=2910, max=94391, avg=4139.72, stdev=2212.60 00:13:31.518 clat (usec): min=159, max=10501, avg=1622.31, stdev=310.96 00:13:31.518 lat (usec): min=163, max=10504, avg=1626.45, stdev=311.39 00:13:31.518 clat percentiles (usec): 00:13:31.518 | 1.00th=[ 1123], 5.00th=[ 1237], 10.00th=[ 1287], 20.00th=[ 1385], 00:13:31.518 | 30.00th=[ 1450], 40.00th=[ 1516], 50.00th=[ 1582], 60.00th=[ 1647], 00:13:31.518 | 70.00th=[ 1729], 80.00th=[ 1844], 90.00th=[ 2008], 95.00th=[ 2180], 00:13:31.518 | 99.00th=[ 2507], 99.50th=[ 2638], 99.90th=[ 3163], 99.95th=[ 3556], 00:13:31.518 | 99.99th=[ 7570] 00:13:31.518 bw ( KiB/s): min=132096, max=163176, per=100.00%, avg=144306.67, stdev=10275.60, samples=9 00:13:31.518 iops : min=33024, max=40794, avg=36076.67, stdev=2568.90, samples=9 00:13:31.518 lat (usec) : 250=0.01%, 500=0.02%, 750=0.03%, 1000=0.07% 00:13:31.518 lat (msec) : 2=89.42%, 4=10.42%, 10=0.03%, 20=0.01% 00:13:31.518 cpu : usr=34.52%, sys=64.12%, ctx=11, majf=0, minf=1064 00:13:31.518 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:31.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.518 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:31.518 issued rwts: total=0,178949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.518 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:31.518 00:13:31.518 Run status group 0 (all jobs): 00:13:31.518 WRITE: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=699MiB (733MB), run=5001-5001msec 00:13:31.518 ----------------------------------------------------- 00:13:31.518 Suppressions used: 00:13:31.518 count bytes template 00:13:31.518 1 11 /usr/src/fio/parse.c 00:13:31.518 1 8 libtcmalloc_minimal.so 00:13:31.518 1 904 libcrypto.so 00:13:31.518 ----------------------------------------------------- 00:13:31.518 00:13:31.518 00:13:31.518 real 0m11.979s 00:13:31.518 user 0m4.441s 00:13:31.518 sys 0m7.083s 00:13:31.518 
************************************ 00:13:31.518 END TEST xnvme_fio_plugin 00:13:31.518 ************************************ 00:13:31.518 18:08:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.518 18:08:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:31.518 18:08:05 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:31.518 18:08:05 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:31.519 18:08:05 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:31.519 18:08:05 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:31.519 18:08:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.519 18:08:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.519 18:08:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.519 ************************************ 00:13:31.519 START TEST xnvme_rpc 00:13:31.519 ************************************ 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83353 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83353 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83353 ']' 00:13:31.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.519 18:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.519 [2024-12-13 18:08:05.194511] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
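The two fio runs that just finished drive the same xnvme bdev through SPDK's fio plugin instead of bdevperf. A hand-run sketch of the command the trace assembles, with bdev.json standing in for the /dev/fd/62 config feed (an assumed local file holding the same subsystems block shown above); the libasan entry in LD_PRELOAD is only there because this build is ASan-instrumented, so the sanitizer runtime must load before the plugin:

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

Note that --filename names the bdev from the JSON config rather than a device node, and --thread=1 keeps fio in thread mode, which the spdk_bdev ioengine expects.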
00:13:31.519 [2024-12-13 18:08:05.194667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83353 ] 00:13:31.519 [2024-12-13 18:08:05.340227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.519 [2024-12-13 18:08:05.369470] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.779 xnvme_bdev 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:31.779 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.780 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.780 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:31.780 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83353 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83353 ']' 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83353 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83353 00:13:32.040 killing process with pid 83353 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83353' 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83353 00:13:32.040 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83353 00:13:32.301 00:13:32.301 real 0m1.429s 00:13:32.301 user 0m1.504s 00:13:32.301 sys 0m0.408s 00:13:32.301 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.301 ************************************ 00:13:32.301 END TEST xnvme_rpc 00:13:32.301 ************************************ 00:13:32.301 18:08:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.301 18:08:06 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:32.301 18:08:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:32.301 18:08:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.301 18:08:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.301 ************************************ 00:13:32.301 START TEST xnvme_bdevperf 00:13:32.301 ************************************ 00:13:32.301 18:08:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:32.301 18:08:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:32.301 18:08:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:32.301 18:08:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:32.301 18:08:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:32.301 18:08:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:13:32.301 18:08:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:32.301 18:08:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:32.301 { 00:13:32.301 "subsystems": [ 00:13:32.301 { 00:13:32.301 "subsystem": "bdev", 00:13:32.301 "config": [ 00:13:32.301 { 00:13:32.301 "params": { 00:13:32.301 "io_mechanism": "io_uring", 00:13:32.301 "conserve_cpu": true, 00:13:32.301 "filename": "/dev/nvme0n1", 00:13:32.301 "name": "xnvme_bdev" 00:13:32.301 }, 00:13:32.301 "method": "bdev_xnvme_create" 00:13:32.301 }, 00:13:32.301 { 00:13:32.301 "method": "bdev_wait_for_examine" 00:13:32.301 } 00:13:32.301 ] 00:13:32.301 } 00:13:32.301 ] 00:13:32.301 } 00:13:32.562 [2024-12-13 18:08:06.679900] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:32.562 [2024-12-13 18:08:06.680047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83415 ] 00:13:32.562 [2024-12-13 18:08:06.824669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.562 [2024-12-13 18:08:06.853178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.823 Running I/O for 5 seconds... 00:13:34.695 33586.00 IOPS, 131.20 MiB/s [2024-12-13T18:08:10.007Z] 35835.50 IOPS, 139.98 MiB/s [2024-12-13T18:08:11.380Z] 36469.33 IOPS, 142.46 MiB/s [2024-12-13T18:08:12.327Z] 37257.50 IOPS, 145.54 MiB/s [2024-12-13T18:08:12.327Z] 37812.00 IOPS, 147.70 MiB/s 00:13:37.950 Latency(us) 00:13:37.950 [2024-12-13T18:08:12.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.950 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:37.950 xnvme_bdev : 5.00 37800.77 147.66 0.00 0.00 1689.14 346.58 11241.94 00:13:37.950 [2024-12-13T18:08:12.327Z] =================================================================================================================== 00:13:37.950 [2024-12-13T18:08:12.327Z] Total : 37800.77 147.66 0.00 0.00 1689.14 346.58 11241.94 00:13:37.950 18:08:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.950 18:08:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:37.950 18:08:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:37.950 18:08:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:37.950 18:08:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:37.950 { 00:13:37.950 "subsystems": [ 00:13:37.950 { 00:13:37.950 "subsystem": "bdev", 00:13:37.950 "config": [ 00:13:37.950 { 00:13:37.950 "params": { 00:13:37.950 "io_mechanism": "io_uring", 00:13:37.950 "conserve_cpu": true, 00:13:37.950 "filename": "/dev/nvme0n1", 00:13:37.950 "name": "xnvme_bdev" 00:13:37.950 }, 00:13:37.950 "method": "bdev_xnvme_create" 00:13:37.950 }, 00:13:37.950 { 00:13:37.950 "method": "bdev_wait_for_examine" 00:13:37.950 } 00:13:37.950 ] 00:13:37.950 } 00:13:37.950 ] 00:13:37.950 } 00:13:37.950 [2024-12-13 18:08:12.158569] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
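The xnvme_rpc round traced just above reduces to a handful of RPCs; rpc_cmd in the trace forwards its arguments to scripts/rpc.py over /var/tmp/spdk.sock. A sketch of the conserve_cpu variant run by hand (the sleep is a crude stand-in for the suite's waitforlisten helper):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK"/build/bin/spdk_tgt &    # target process from this run's build tree
tgt=$!
sleep 1                          # crude stand-in for waitforlisten
"$SPDK"/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c sets conserve_cpu
# Each property check reads the live config back and picks one field with jq:
"$SPDK"/scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # prints: true
"$SPDK"/scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill $tgt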
00:13:37.950 [2024-12-13 18:08:12.158676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83480 ] 00:13:37.950 [2024-12-13 18:08:12.303162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.950 [2024-12-13 18:08:12.322408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.212 Running I/O for 5 seconds... 00:13:40.098 27468.00 IOPS, 107.30 MiB/s [2024-12-13T18:08:15.419Z] 26443.00 IOPS, 103.29 MiB/s [2024-12-13T18:08:16.807Z] 23508.33 IOPS, 91.83 MiB/s [2024-12-13T18:08:17.747Z] 22211.50 IOPS, 86.76 MiB/s [2024-12-13T18:08:17.747Z] 21514.20 IOPS, 84.04 MiB/s 00:13:43.370 Latency(us) 00:13:43.370 [2024-12-13T18:08:17.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.370 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:43.370 xnvme_bdev : 5.01 21492.04 83.95 0.00 0.00 2970.78 47.66 21979.77 00:13:43.370 [2024-12-13T18:08:17.747Z] =================================================================================================================== 00:13:43.370 [2024-12-13T18:08:17.747Z] Total : 21492.04 83.95 0.00 0.00 2970.78 47.66 21979.77 00:13:43.370 00:13:43.370 real 0m10.926s 00:13:43.370 user 0m6.970s 00:13:43.370 sys 0m2.863s 00:13:43.370 18:08:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.370 18:08:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:43.370 ************************************ 00:13:43.370 END TEST xnvme_bdevperf 00:13:43.370 ************************************ 00:13:43.370 18:08:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:43.370 18:08:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.370 18:08:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.370 18:08:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.370 ************************************ 00:13:43.370 START TEST xnvme_fio_plugin 00:13:43.370 ************************************ 00:13:43.370 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:43.370 18:08:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:43.370 18:08:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:43.370 18:08:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.370 18:08:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.370 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.370 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:43.370 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:43.371 18:08:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.371 { 00:13:43.371 "subsystems": [ 00:13:43.371 { 00:13:43.371 "subsystem": "bdev", 00:13:43.371 "config": [ 00:13:43.371 { 00:13:43.371 "params": { 00:13:43.371 "io_mechanism": "io_uring", 00:13:43.371 "conserve_cpu": true, 00:13:43.371 "filename": "/dev/nvme0n1", 00:13:43.371 "name": "xnvme_bdev" 00:13:43.371 }, 00:13:43.371 "method": "bdev_xnvme_create" 00:13:43.371 }, 00:13:43.371 { 00:13:43.371 "method": "bdev_wait_for_examine" 00:13:43.371 } 00:13:43.371 ] 00:13:43.371 } 00:13:43.371 ] 00:13:43.371 } 00:13:43.630 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:43.630 fio-3.35 00:13:43.630 Starting 1 thread 00:13:48.905 00:13:48.905 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83584: Fri Dec 13 18:08:23 2024 00:13:48.905 read: IOPS=41.7k, BW=163MiB/s (171MB/s)(815MiB/5002msec) 00:13:48.905 slat (usec): min=2, max=133, avg= 3.66, stdev= 1.96 00:13:48.905 clat (usec): min=600, max=6875, avg=1390.44, stdev=365.51 00:13:48.905 lat (usec): min=603, max=6878, avg=1394.10, stdev=365.88 00:13:48.905 clat percentiles (usec): 00:13:48.905 | 1.00th=[ 709], 5.00th=[ 799], 10.00th=[ 889], 20.00th=[ 1090], 00:13:48.905 | 30.00th=[ 1221], 40.00th=[ 1303], 50.00th=[ 1385], 60.00th=[ 1467], 00:13:48.905 | 70.00th=[ 1549], 80.00th=[ 1647], 90.00th=[ 1827], 95.00th=[ 1991], 00:13:48.905 | 99.00th=[ 2376], 99.50th=[ 2540], 99.90th=[ 3326], 99.95th=[ 3752], 00:13:48.905 | 99.99th=[ 5211] 00:13:48.905 bw ( KiB/s): 
min=153600, max=212080, per=100.00%, avg=169712.00, stdev=18506.85, samples=9 00:13:48.905 iops : min=38400, max=53020, avg=42428.00, stdev=4626.71, samples=9 00:13:48.905 lat (usec) : 750=2.55%, 1000=12.59% 00:13:48.905 lat (msec) : 2=80.22%, 4=4.61%, 10=0.04% 00:13:48.905 cpu : usr=56.91%, sys=39.57%, ctx=12, majf=0, minf=1063 00:13:48.905 IO depths : 1=1.4%, 2=3.0%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:48.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.905 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:48.905 issued rwts: total=208661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:48.905 00:13:48.905 Run status group 0 (all jobs): 00:13:48.905 READ: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=815MiB (855MB), run=5002-5002msec 00:13:49.166 ----------------------------------------------------- 00:13:49.166 Suppressions used: 00:13:49.166 count bytes template 00:13:49.166 1 11 /usr/src/fio/parse.c 00:13:49.166 1 8 libtcmalloc_minimal.so 00:13:49.166 1 904 libcrypto.so 00:13:49.166 ----------------------------------------------------- 00:13:49.166 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:49.428 18:08:23 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:49.428 18:08:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.428 { 00:13:49.428 "subsystems": [ 00:13:49.428 { 00:13:49.428 "subsystem": "bdev", 00:13:49.428 "config": [ 00:13:49.428 { 00:13:49.428 "params": { 00:13:49.428 "io_mechanism": "io_uring", 00:13:49.428 "conserve_cpu": true, 00:13:49.428 "filename": "/dev/nvme0n1", 00:13:49.428 "name": "xnvme_bdev" 00:13:49.428 }, 00:13:49.428 "method": "bdev_xnvme_create" 00:13:49.428 }, 00:13:49.428 { 00:13:49.428 "method": "bdev_wait_for_examine" 00:13:49.428 } 00:13:49.428 ] 00:13:49.428 } 00:13:49.428 ] 00:13:49.428 } 00:13:49.428 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:49.429 fio-3.35 00:13:49.429 Starting 1 thread 00:13:54.827 00:13:54.827 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83669: Fri Dec 13 18:08:29 2024 00:13:54.827 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(664MiB/5001msec); 0 zone resets 00:13:54.827 slat (usec): min=2, max=202, avg= 4.07, stdev= 2.41 00:13:54.827 clat (usec): min=160, max=10186, avg=1718.89, stdev=302.53 00:13:54.827 lat (usec): min=164, max=10189, avg=1722.96, stdev=302.97 00:13:54.827 clat percentiles (usec): 00:13:54.827 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1483], 00:13:54.827 | 30.00th=[ 1549], 40.00th=[ 1614], 50.00th=[ 1680], 60.00th=[ 1762], 00:13:54.827 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2089], 95.00th=[ 2245], 00:13:54.827 | 99.00th=[ 2540], 99.50th=[ 2638], 99.90th=[ 2933], 99.95th=[ 3195], 00:13:54.827 | 99.99th=[ 9241] 00:13:54.827 bw ( KiB/s): min=132312, max=141800, per=100.00%, avg=135966.22, stdev=3197.87, samples=9 00:13:54.827 iops : min=33078, max=35450, avg=33991.56, stdev=799.47, samples=9 00:13:54.827 lat (usec) : 250=0.01%, 500=0.01%, 1000=0.03% 00:13:54.827 lat (msec) : 2=84.74%, 4=15.19%, 10=0.03%, 20=0.01% 00:13:54.827 cpu : usr=50.72%, sys=44.86%, ctx=8, majf=0, minf=1064 00:13:54.828 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:54.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.828 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:54.828 issued rwts: total=0,169919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.828 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:54.828 00:13:54.828 Run status group 0 (all jobs): 00:13:54.828 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=664MiB (696MB), run=5001-5001msec 00:13:55.405 ----------------------------------------------------- 00:13:55.405 Suppressions used: 00:13:55.405 count bytes template 00:13:55.405 1 11 /usr/src/fio/parse.c 00:13:55.405 1 8 libtcmalloc_minimal.so 00:13:55.405 1 904 libcrypto.so 00:13:55.405 ----------------------------------------------------- 00:13:55.405 00:13:55.405 00:13:55.405 real 0m11.950s 00:13:55.405 user 0m6.551s 00:13:55.405 sys 0m4.702s 00:13:55.405 
************************************ 00:13:55.405 END TEST xnvme_fio_plugin 00:13:55.405 ************************************ 00:13:55.405 18:08:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.405 18:08:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:55.405 18:08:29 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:55.405 18:08:29 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:55.405 18:08:29 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:55.405 18:08:29 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:55.405 18:08:29 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:55.405 18:08:29 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:55.405 18:08:29 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:55.405 18:08:29 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:55.406 18:08:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:55.406 18:08:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:55.406 18:08:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.406 18:08:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:55.406 ************************************ 00:13:55.406 START TEST xnvme_rpc 00:13:55.406 ************************************ 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83745 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83745 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83745 ']' 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.406 18:08:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:55.406 [2024-12-13 18:08:29.704853] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
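The round starting here swaps both the io mechanism and the device: io_uring_cmd issues NVMe passthrough commands against the character device /dev/ng0n1 rather than block IO against /dev/nvme0n1. Only the create arguments change (a sketch, same $SPDK convention as the earlier one):

"$SPDK"/scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
"$SPDK"/scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # prints: io_uring_cmd
"$SPDK"/scripts/rpc.py bdev_xnvme_delete xnvme_bdev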
00:13:55.406 [2024-12-13 18:08:29.705005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83745 ] 00:13:55.667 [2024-12-13 18:08:29.854116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.667 [2024-12-13 18:08:29.882488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.239 xnvme_bdev 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:56.239 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:56.500 
18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83745 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83745 ']' 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83745 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83745 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.500 killing process with pid 83745 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83745' 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83745 00:13:56.500 18:08:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83745 00:13:56.762 00:13:56.762 real 0m1.444s 00:13:56.762 user 0m1.507s 00:13:56.762 sys 0m0.425s 00:13:56.762 18:08:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.762 ************************************ 00:13:56.762 END TEST xnvme_rpc 00:13:56.762 ************************************ 00:13:56.762 18:08:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.762 18:08:31 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:56.762 18:08:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.762 18:08:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.762 18:08:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.024 ************************************ 00:13:57.024 START TEST xnvme_bdevperf 00:13:57.024 ************************************ 00:13:57.024 18:08:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:57.024 18:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:57.024 18:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:57.024 18:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.024 18:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:57.024 18:08:31 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:57.024 18:08:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:57.024 18:08:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:57.024 { 00:13:57.024 "subsystems": [ 00:13:57.024 { 00:13:57.024 "subsystem": "bdev", 00:13:57.024 "config": [ 00:13:57.024 { 00:13:57.024 "params": { 00:13:57.024 "io_mechanism": "io_uring_cmd", 00:13:57.024 "conserve_cpu": false, 00:13:57.024 "filename": "/dev/ng0n1", 00:13:57.024 "name": "xnvme_bdev" 00:13:57.024 }, 00:13:57.024 "method": "bdev_xnvme_create" 00:13:57.024 }, 00:13:57.024 { 00:13:57.024 "method": "bdev_wait_for_examine" 00:13:57.024 } 00:13:57.024 ] 00:13:57.024 } 00:13:57.024 ] 00:13:57.024 } 00:13:57.024 [2024-12-13 18:08:31.207484] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:13:57.024 [2024-12-13 18:08:31.207618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83803 ] 00:13:57.024 [2024-12-13 18:08:31.358283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.024 [2024-12-13 18:08:31.387053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.287 Running I/O for 5 seconds... 00:13:59.175 32397.00 IOPS, 126.55 MiB/s [2024-12-13T18:08:34.495Z] 33986.50 IOPS, 132.76 MiB/s [2024-12-13T18:08:35.882Z] 35227.67 IOPS, 137.61 MiB/s [2024-12-13T18:08:36.827Z] 35945.25 IOPS, 140.41 MiB/s 00:14:02.450 Latency(us) 00:14:02.450 [2024-12-13T18:08:36.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.450 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:02.450 xnvme_bdev : 5.00 37724.57 147.36 0.00 0.00 1692.56 330.83 17442.66 00:14:02.450 [2024-12-13T18:08:36.827Z] =================================================================================================================== 00:14:02.450 [2024-12-13T18:08:36.827Z] Total : 37724.57 147.36 0.00 0.00 1692.56 330.83 17442.66 00:14:02.450 18:08:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.450 18:08:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:02.450 18:08:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:02.450 18:08:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:02.450 18:08:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:02.450 { 00:14:02.450 "subsystems": [ 00:14:02.450 { 00:14:02.450 "subsystem": "bdev", 00:14:02.450 "config": [ 00:14:02.450 { 00:14:02.450 "params": { 00:14:02.450 "io_mechanism": "io_uring_cmd", 00:14:02.450 "conserve_cpu": false, 00:14:02.450 "filename": "/dev/ng0n1", 00:14:02.450 "name": "xnvme_bdev" 00:14:02.450 }, 00:14:02.450 "method": "bdev_xnvme_create" 00:14:02.450 }, 00:14:02.450 { 00:14:02.450 "method": "bdev_wait_for_examine" 00:14:02.450 } 00:14:02.450 ] 00:14:02.450 } 00:14:02.450 ] 00:14:02.450 } 00:14:02.450 [2024-12-13 18:08:36.723394] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
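Each xnvme_bdevperf round is one loop over IO patterns, and with io_uring_cmd the list this suite iterates extends past randread and randwrite to unmap and write_zeroes, as the runs below show. The reconstructed shape of the loop (a sketch: gen_conf is the trace's own helper that emits the JSON config shown above, and the pattern list is spelled out here rather than taken from the suite's array):

for io_pattern in randread randwrite unmap write_zeroes; do
  "$SPDK"/build/examples/bdevperf --json <(gen_conf) \
    -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done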
00:14:02.450 [2024-12-13 18:08:36.723535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83866 ] 00:14:02.712 [2024-12-13 18:08:36.870855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.712 [2024-12-13 18:08:36.899156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.712 Running I/O for 5 seconds... 00:14:05.045 44767.00 IOPS, 174.87 MiB/s [2024-12-13T18:08:40.367Z] 45082.00 IOPS, 176.10 MiB/s [2024-12-13T18:08:41.312Z] 45676.00 IOPS, 178.42 MiB/s [2024-12-13T18:08:42.256Z] 42842.75 IOPS, 167.35 MiB/s [2024-12-13T18:08:42.256Z] 39115.60 IOPS, 152.80 MiB/s 00:14:07.879 Latency(us) 00:14:07.879 [2024-12-13T18:08:42.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.879 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:07.879 xnvme_bdev : 5.01 39078.72 152.65 0.00 0.00 1633.29 92.95 29844.09 00:14:07.879 [2024-12-13T18:08:42.256Z] =================================================================================================================== 00:14:07.879 [2024-12-13T18:08:42.256Z] Total : 39078.72 152.65 0.00 0.00 1633.29 92.95 29844.09 00:14:07.879 18:08:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.879 18:08:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:07.879 18:08:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.879 18:08:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.879 18:08:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.879 { 00:14:07.879 "subsystems": [ 00:14:07.880 { 00:14:07.880 "subsystem": "bdev", 00:14:07.880 "config": [ 00:14:07.880 { 00:14:07.880 "params": { 00:14:07.880 "io_mechanism": "io_uring_cmd", 00:14:07.880 "conserve_cpu": false, 00:14:07.880 "filename": "/dev/ng0n1", 00:14:07.880 "name": "xnvme_bdev" 00:14:07.880 }, 00:14:07.880 "method": "bdev_xnvme_create" 00:14:07.880 }, 00:14:07.880 { 00:14:07.880 "method": "bdev_wait_for_examine" 00:14:07.880 } 00:14:07.880 ] 00:14:07.880 } 00:14:07.880 ] 00:14:07.880 } 00:14:07.880 [2024-12-13 18:08:42.252076] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:07.880 [2024-12-13 18:08:42.252277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83929 ] 00:14:08.141 [2024-12-13 18:08:42.398657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.141 [2024-12-13 18:08:42.426927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.402 Running I/O for 5 seconds... 
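Each bdevperf pass in this test consumes its bdev table as JSON on /dev/fd/62, generated by gen_conf; only the -w workload changes between passes. The same pass can be reproduced outside the harness by writing that JSON to a regular file. A minimal sketch for the unmap pass above, assuming the repo layout and /dev/ng0n1 namespace of this run (the /tmp path is illustrative):

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": false,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Same invocation as in the trace above, minus the /dev/fd/62 indirection:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_bdev.json \
  -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096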
00:14:10.291 71424.00 IOPS, 279.00 MiB/s [2024-12-13T18:08:45.619Z] 73952.00 IOPS, 288.88 MiB/s [2024-12-13T18:08:46.565Z] 73834.67 IOPS, 288.42 MiB/s [2024-12-13T18:08:47.955Z] 75104.00 IOPS, 293.38 MiB/s [2024-12-13T18:08:47.955Z] 78643.20 IOPS, 307.20 MiB/s 00:14:13.578 Latency(us) 00:14:13.578 [2024-12-13T18:08:47.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.578 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:13.578 xnvme_bdev : 5.00 78611.57 307.08 0.00 0.00 810.69 441.11 2533.22 00:14:13.578 [2024-12-13T18:08:47.955Z] =================================================================================================================== 00:14:13.578 [2024-12-13T18:08:47.955Z] Total : 78611.57 307.08 0.00 0.00 810.69 441.11 2533.22 00:14:13.578 18:08:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.578 18:08:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:13.578 18:08:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:13.578 18:08:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:13.578 18:08:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.578 { 00:14:13.578 "subsystems": [ 00:14:13.578 { 00:14:13.578 "subsystem": "bdev", 00:14:13.578 "config": [ 00:14:13.578 { 00:14:13.578 "params": { 00:14:13.578 "io_mechanism": "io_uring_cmd", 00:14:13.578 "conserve_cpu": false, 00:14:13.578 "filename": "/dev/ng0n1", 00:14:13.578 "name": "xnvme_bdev" 00:14:13.578 }, 00:14:13.578 "method": "bdev_xnvme_create" 00:14:13.578 }, 00:14:13.578 { 00:14:13.578 "method": "bdev_wait_for_examine" 00:14:13.578 } 00:14:13.578 ] 00:14:13.578 } 00:14:13.578 ] 00:14:13.578 } 00:14:13.578 [2024-12-13 18:08:47.714177] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:13.578 [2024-12-13 18:08:47.714313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84000 ] 00:14:13.578 [2024-12-13 18:08:47.857941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.578 [2024-12-13 18:08:47.877780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.578 Running I/O for 5 seconds... 
00:14:15.909 379.00 IOPS, 1.48 MiB/s [2024-12-13T18:08:51.228Z] 313.00 IOPS, 1.22 MiB/s [2024-12-13T18:08:52.172Z] 327.67 IOPS, 1.28 MiB/s [2024-12-13T18:08:53.118Z] 326.25 IOPS, 1.27 MiB/s [2024-12-13T18:08:53.380Z] 757.40 IOPS, 2.96 MiB/s 00:14:19.003 Latency(us) 00:14:19.003 [2024-12-13T18:08:53.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.003 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:19.003 xnvme_bdev : 5.31 725.63 2.83 0.00 0.00 85673.39 79.56 642051.15 00:14:19.003 [2024-12-13T18:08:53.380Z] =================================================================================================================== 00:14:19.003 [2024-12-13T18:08:53.380Z] Total : 725.63 2.83 0.00 0.00 85673.39 79.56 642051.15 00:14:19.265 00:14:19.265 real 0m22.291s 00:14:19.265 user 0m11.910s 00:14:19.265 sys 0m9.946s 00:14:19.265 18:08:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.265 ************************************ 00:14:19.265 END TEST xnvme_bdevperf 00:14:19.265 18:08:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.265 ************************************ 00:14:19.265 18:08:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:19.265 18:08:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:19.265 18:08:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.265 18:08:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.265 ************************************ 00:14:19.265 START TEST xnvme_fio_plugin 00:14:19.265 ************************************ 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for 
sanitizer in "${sanitizers[@]}" 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:19.265 18:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.265 { 00:14:19.265 "subsystems": [ 00:14:19.265 { 00:14:19.265 "subsystem": "bdev", 00:14:19.265 "config": [ 00:14:19.265 { 00:14:19.265 "params": { 00:14:19.265 "io_mechanism": "io_uring_cmd", 00:14:19.265 "conserve_cpu": false, 00:14:19.265 "filename": "/dev/ng0n1", 00:14:19.265 "name": "xnvme_bdev" 00:14:19.265 }, 00:14:19.265 "method": "bdev_xnvme_create" 00:14:19.265 }, 00:14:19.265 { 00:14:19.265 "method": "bdev_wait_for_examine" 00:14:19.265 } 00:14:19.265 ] 00:14:19.265 } 00:14:19.265 ] 00:14:19.265 } 00:14:19.526 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:19.526 fio-3.35 00:14:19.526 Starting 1 thread 00:14:24.831 00:14:24.831 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84107: Fri Dec 13 18:08:59 2024 00:14:24.831 read: IOPS=43.5k, BW=170MiB/s (178MB/s)(849MiB/5001msec) 00:14:24.831 slat (usec): min=2, max=140, avg= 3.26, stdev= 1.36 00:14:24.831 clat (usec): min=317, max=3108, avg=1344.37, stdev=236.27 00:14:24.831 lat (usec): min=320, max=3131, avg=1347.63, stdev=236.46 00:14:24.831 clat percentiles (usec): 00:14:24.831 | 1.00th=[ 979], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:14:24.831 | 30.00th=[ 1205], 40.00th=[ 1254], 50.00th=[ 1287], 60.00th=[ 1336], 00:14:24.831 | 70.00th=[ 1401], 80.00th=[ 1500], 90.00th=[ 1680], 95.00th=[ 1827], 00:14:24.831 | 99.00th=[ 2114], 99.50th=[ 2245], 99.90th=[ 2474], 99.95th=[ 2573], 00:14:24.831 | 99.99th=[ 2933] 00:14:24.831 bw ( KiB/s): min=166912, max=181760, per=100.00%, avg=175104.00, stdev=6079.66, samples=9 00:14:24.831 iops : min=41728, max=45440, avg=43776.00, stdev=1519.92, samples=9 00:14:24.831 lat (usec) : 500=0.01%, 1000=1.57% 00:14:24.831 lat (msec) : 2=96.43%, 4=1.99% 00:14:24.831 cpu : usr=41.90%, sys=57.00%, ctx=45, majf=0, minf=1063 00:14:24.831 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:24.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.831 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:24.831 
issued rwts: total=217426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.831 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:24.831 00:14:24.831 Run status group 0 (all jobs): 00:14:24.831 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=849MiB (891MB), run=5001-5001msec 00:14:25.092 ----------------------------------------------------- 00:14:25.092 Suppressions used: 00:14:25.092 count bytes template 00:14:25.092 1 11 /usr/src/fio/parse.c 00:14:25.092 1 8 libtcmalloc_minimal.so 00:14:25.092 1 904 libcrypto.so 00:14:25.092 ----------------------------------------------------- 00:14:25.092 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:25.092 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.353 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:25.353 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:25.353 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:25.353 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:25.353 18:08:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.353 { 00:14:25.353 "subsystems": [ 00:14:25.353 { 00:14:25.353 "subsystem": "bdev", 00:14:25.353 "config": [ 00:14:25.353 { 00:14:25.353 "params": { 00:14:25.353 "io_mechanism": "io_uring_cmd", 00:14:25.353 "conserve_cpu": false, 00:14:25.353 "filename": "/dev/ng0n1", 00:14:25.353 "name": "xnvme_bdev" 00:14:25.353 }, 00:14:25.353 "method": "bdev_xnvme_create" 00:14:25.353 }, 00:14:25.353 { 00:14:25.353 "method": "bdev_wait_for_examine" 00:14:25.353 } 00:14:25.353 ] 00:14:25.353 } 00:14:25.353 ] 00:14:25.353 } 00:14:25.353 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:25.353 fio-3.35 00:14:25.353 Starting 1 thread 00:14:31.945 00:14:31.945 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84187: Fri Dec 13 18:09:05 2024 00:14:31.945 write: IOPS=35.3k, BW=138MiB/s (144MB/s)(689MiB/5001msec); 0 zone resets 00:14:31.945 slat (usec): min=2, max=581, avg= 4.27, stdev= 3.13 00:14:31.945 clat (usec): min=135, max=4860, avg=1644.38, stdev=304.39 00:14:31.945 lat (usec): min=139, max=4890, avg=1648.66, stdev=305.03 00:14:31.945 clat percentiles (usec): 00:14:31.945 | 1.00th=[ 1106], 5.00th=[ 1221], 10.00th=[ 1303], 20.00th=[ 1401], 00:14:31.945 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1680], 00:14:31.945 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2024], 95.00th=[ 2180], 00:14:31.945 | 99.00th=[ 2507], 99.50th=[ 2769], 99.90th=[ 3359], 99.95th=[ 3687], 00:14:31.945 | 99.99th=[ 4621] 00:14:31.945 bw ( KiB/s): min=129528, max=160080, per=100.00%, avg=141685.33, stdev=8407.08, samples=9 00:14:31.945 iops : min=32382, max=40020, avg=35421.33, stdev=2101.77, samples=9 00:14:31.945 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.09% 00:14:31.945 lat (msec) : 2=88.46%, 4=11.38%, 10=0.03% 00:14:31.945 cpu : usr=37.58%, sys=60.52%, ctx=31, majf=0, minf=1064 00:14:31.945 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.6% 00:14:31.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.945 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:31.945 issued rwts: total=0,176311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.945 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:31.945 00:14:31.945 Run status group 0 (all jobs): 00:14:31.946 WRITE: bw=138MiB/s (144MB/s), 138MiB/s-138MiB/s (144MB/s-144MB/s), io=689MiB (722MB), run=5001-5001msec 00:14:31.946 ----------------------------------------------------- 00:14:31.946 Suppressions used: 00:14:31.946 count bytes template 00:14:31.946 1 11 /usr/src/fio/parse.c 00:14:31.946 1 8 libtcmalloc_minimal.so 00:14:31.946 1 904 libcrypto.so 00:14:31.946 ----------------------------------------------------- 00:14:31.946 00:14:31.946 00:14:31.946 real 0m11.957s 00:14:31.946 user 0m5.089s 00:14:31.946 sys 0m6.399s 00:14:31.946 18:09:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.946 18:09:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:31.946 ************************************ 00:14:31.946 END TEST xnvme_fio_plugin 00:14:31.946 ************************************ 00:14:31.946 18:09:05 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:31.946 18:09:05 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:31.946 18:09:05 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 
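With conserve_cpu now flipped to true, the xnvme_rpc pass below repeats the RPC round-trip: create the bdev with -c, read the configuration back through framework_get_config, then delete the bdev. Against a live spdk_tgt the equivalent manual sequence is, as a sketch (rpc_cmd wraps scripts/rpc.py; the relative path and the default /var/tmp/spdk.sock socket are assumed here):

# Create the xnvme bdev with conserve_cpu enabled (-c), as the test does:
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
# Read the setting back with the same jq filter the test uses; expect: true
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# Tear the bdev down again:
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev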
00:14:31.946 18:09:05 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:31.946 18:09:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:31.946 18:09:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.946 18:09:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.946 ************************************ 00:14:31.946 START TEST xnvme_rpc 00:14:31.946 ************************************ 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84261 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84261 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84261 ']' 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.946 18:09:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.946 [2024-12-13 18:09:05.618336] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:31.946 [2024-12-13 18:09:05.618496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84261 ] 00:14:31.946 [2024-12-13 18:09:05.765205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.946 [2024-12-13 18:09:05.794123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.208 xnvme_bdev 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.208 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84261 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84261 ']' 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84261 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84261 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84261' 00:14:32.470 killing process with pid 84261 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84261 00:14:32.470 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84261 00:14:32.731 00:14:32.731 real 0m1.405s 00:14:32.731 user 0m1.511s 00:14:32.731 sys 0m0.372s 00:14:32.731 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.731 ************************************ 00:14:32.731 END TEST xnvme_rpc 00:14:32.731 ************************************ 00:14:32.731 18:09:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.731 18:09:06 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:32.731 18:09:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.731 18:09:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.731 18:09:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.731 ************************************ 00:14:32.731 START TEST xnvme_bdevperf 00:14:32.731 ************************************ 00:14:32.731 18:09:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:32.731 18:09:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:32.731 18:09:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:32.731 18:09:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.731 18:09:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:32.731 18:09:07 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:32.731 18:09:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:32.731 18:09:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.731 { 00:14:32.731 "subsystems": [ 00:14:32.731 { 00:14:32.731 "subsystem": "bdev", 00:14:32.731 "config": [ 00:14:32.731 { 00:14:32.731 "params": { 00:14:32.731 "io_mechanism": "io_uring_cmd", 00:14:32.731 "conserve_cpu": true, 00:14:32.731 "filename": "/dev/ng0n1", 00:14:32.731 "name": "xnvme_bdev" 00:14:32.731 }, 00:14:32.731 "method": "bdev_xnvme_create" 00:14:32.731 }, 00:14:32.731 { 00:14:32.731 "method": "bdev_wait_for_examine" 00:14:32.731 } 00:14:32.731 ] 00:14:32.731 } 00:14:32.731 ] 00:14:32.731 } 00:14:32.731 [2024-12-13 18:09:07.069475] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:32.731 [2024-12-13 18:09:07.069606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84324 ] 00:14:32.992 [2024-12-13 18:09:07.216923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.992 [2024-12-13 18:09:07.245695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.992 Running I/O for 5 seconds... 00:14:35.324 33920.00 IOPS, 132.50 MiB/s [2024-12-13T18:09:10.709Z] 33696.00 IOPS, 131.62 MiB/s [2024-12-13T18:09:11.653Z] 33856.00 IOPS, 132.25 MiB/s [2024-12-13T18:09:12.597Z] 33920.00 IOPS, 132.50 MiB/s 00:14:38.220 Latency(us) 00:14:38.220 [2024-12-13T18:09:12.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.220 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:38.220 xnvme_bdev : 5.00 34046.87 133.00 0.00 0.00 1875.42 875.91 4335.46 00:14:38.220 [2024-12-13T18:09:12.597Z] =================================================================================================================== 00:14:38.220 [2024-12-13T18:09:12.597Z] Total : 34046.87 133.00 0.00 0.00 1875.42 875.91 4335.46 00:14:38.220 18:09:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.220 18:09:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:38.220 18:09:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.220 18:09:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.220 18:09:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.220 { 00:14:38.220 "subsystems": [ 00:14:38.220 { 00:14:38.220 "subsystem": "bdev", 00:14:38.220 "config": [ 00:14:38.220 { 00:14:38.220 "params": { 00:14:38.220 "io_mechanism": "io_uring_cmd", 00:14:38.220 "conserve_cpu": true, 00:14:38.220 "filename": "/dev/ng0n1", 00:14:38.220 "name": "xnvme_bdev" 00:14:38.220 }, 00:14:38.220 "method": "bdev_xnvme_create" 00:14:38.220 }, 00:14:38.220 { 00:14:38.220 "method": "bdev_wait_for_examine" 00:14:38.220 } 00:14:38.220 ] 00:14:38.220 } 00:14:38.220 ] 00:14:38.220 } 00:14:38.220 [2024-12-13 18:09:12.592570] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:38.220 [2024-12-13 18:09:12.592959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84387 ] 00:14:38.481 [2024-12-13 18:09:12.742602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.481 [2024-12-13 18:09:12.771151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.742 Running I/O for 5 seconds... 00:14:40.629 36477.00 IOPS, 142.49 MiB/s [2024-12-13T18:09:15.946Z] 35708.50 IOPS, 139.49 MiB/s [2024-12-13T18:09:16.888Z] 35581.33 IOPS, 138.99 MiB/s [2024-12-13T18:09:18.271Z] 35619.50 IOPS, 139.14 MiB/s 00:14:43.894 Latency(us) 00:14:43.894 [2024-12-13T18:09:18.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.894 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:43.894 xnvme_bdev : 5.00 36109.66 141.05 0.00 0.00 1767.71 765.64 8721.33 00:14:43.894 [2024-12-13T18:09:18.271Z] =================================================================================================================== 00:14:43.894 [2024-12-13T18:09:18.271Z] Total : 36109.66 141.05 0.00 0.00 1767.71 765.64 8721.33 00:14:43.894 18:09:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:43.894 18:09:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:43.894 18:09:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:43.894 18:09:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:43.894 18:09:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:43.894 { 00:14:43.894 "subsystems": [ 00:14:43.894 { 00:14:43.894 "subsystem": "bdev", 00:14:43.894 "config": [ 00:14:43.894 { 00:14:43.894 "params": { 00:14:43.894 "io_mechanism": "io_uring_cmd", 00:14:43.894 "conserve_cpu": true, 00:14:43.894 "filename": "/dev/ng0n1", 00:14:43.894 "name": "xnvme_bdev" 00:14:43.894 }, 00:14:43.894 "method": "bdev_xnvme_create" 00:14:43.894 }, 00:14:43.894 { 00:14:43.894 "method": "bdev_wait_for_examine" 00:14:43.894 } 00:14:43.894 ] 00:14:43.894 } 00:14:43.894 ] 00:14:43.894 } 00:14:43.894 [2024-12-13 18:09:18.127503] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:43.894 [2024-12-13 18:09:18.127625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84450 ] 00:14:44.154 [2024-12-13 18:09:18.272484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.154 [2024-12-13 18:09:18.301772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.154 Running I/O for 5 seconds... 
00:14:46.044 79808.00 IOPS, 311.75 MiB/s [2024-12-13T18:09:21.809Z] 79744.00 IOPS, 311.50 MiB/s [2024-12-13T18:09:22.753Z] 79317.33 IOPS, 309.83 MiB/s [2024-12-13T18:09:23.697Z] 77664.00 IOPS, 303.38 MiB/s 00:14:49.320 Latency(us) 00:14:49.320 [2024-12-13T18:09:23.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.320 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:49.320 xnvme_bdev : 5.00 76718.32 299.68 0.00 0.00 830.56 441.11 6326.74 00:14:49.320 [2024-12-13T18:09:23.697Z] =================================================================================================================== 00:14:49.320 [2024-12-13T18:09:23.697Z] Total : 76718.32 299.68 0.00 0.00 830.56 441.11 6326.74 00:14:49.320 18:09:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.320 18:09:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:49.320 18:09:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:49.320 18:09:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.320 18:09:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.320 { 00:14:49.320 "subsystems": [ 00:14:49.320 { 00:14:49.320 "subsystem": "bdev", 00:14:49.320 "config": [ 00:14:49.320 { 00:14:49.320 "params": { 00:14:49.320 "io_mechanism": "io_uring_cmd", 00:14:49.320 "conserve_cpu": true, 00:14:49.320 "filename": "/dev/ng0n1", 00:14:49.320 "name": "xnvme_bdev" 00:14:49.320 }, 00:14:49.320 "method": "bdev_xnvme_create" 00:14:49.320 }, 00:14:49.320 { 00:14:49.320 "method": "bdev_wait_for_examine" 00:14:49.320 } 00:14:49.320 ] 00:14:49.320 } 00:14:49.320 ] 00:14:49.320 } 00:14:49.581 [2024-12-13 18:09:23.728502] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:49.581 [2024-12-13 18:09:23.728650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84519 ] 00:14:49.581 [2024-12-13 18:09:23.878053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.581 [2024-12-13 18:09:23.917531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.842 Running I/O for 5 seconds... 
00:14:51.728 800.00 IOPS, 3.12 MiB/s [2024-12-13T18:09:27.483Z] 4846.50 IOPS, 18.93 MiB/s [2024-12-13T18:09:28.423Z] 17125.67 IOPS, 66.90 MiB/s [2024-12-13T18:09:29.362Z] 22989.50 IOPS, 89.80 MiB/s [2024-12-13T18:09:29.362Z] 26463.00 IOPS, 103.37 MiB/s 00:14:54.985 Latency(us) 00:14:54.985 [2024-12-13T18:09:29.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.985 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:54.985 xnvme_bdev : 5.00 26453.22 103.33 0.00 0.00 2413.83 133.91 224233.94 00:14:54.985 [2024-12-13T18:09:29.362Z] =================================================================================================================== 00:14:54.985 [2024-12-13T18:09:29.362Z] Total : 26453.22 103.33 0.00 0.00 2413.83 133.91 224233.94 00:14:54.985 00:14:54.985 real 0m22.316s 00:14:54.985 user 0m14.069s 00:14:54.985 sys 0m6.422s 00:14:54.985 18:09:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.985 18:09:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.985 ************************************ 00:14:54.985 END TEST xnvme_bdevperf 00:14:54.985 ************************************ 00:14:55.246 18:09:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:55.246 18:09:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:55.246 18:09:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.246 18:09:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:55.246 ************************************ 00:14:55.246 START TEST xnvme_fio_plugin 00:14:55.246 ************************************ 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:55.246 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:55.247 18:09:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.247 { 00:14:55.247 "subsystems": [ 00:14:55.247 { 00:14:55.247 "subsystem": "bdev", 00:14:55.247 "config": [ 00:14:55.247 { 00:14:55.247 "params": { 00:14:55.247 "io_mechanism": "io_uring_cmd", 00:14:55.247 "conserve_cpu": true, 00:14:55.247 "filename": "/dev/ng0n1", 00:14:55.247 "name": "xnvme_bdev" 00:14:55.247 }, 00:14:55.247 "method": "bdev_xnvme_create" 00:14:55.247 }, 00:14:55.247 { 00:14:55.247 "method": "bdev_wait_for_examine" 00:14:55.247 } 00:14:55.247 ] 00:14:55.247 } 00:14:55.247 ] 00:14:55.247 } 00:14:55.247 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:55.247 fio-3.35 00:14:55.247 Starting 1 thread 00:15:01.897 00:15:01.897 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84627: Fri Dec 13 18:09:35 2024 00:15:01.897 read: IOPS=35.4k, BW=138MiB/s (145MB/s)(691MiB/5001msec) 00:15:01.897 slat (nsec): min=2920, max=65268, avg=3823.74, stdev=1988.49 00:15:01.897 clat (usec): min=924, max=3466, avg=1654.68, stdev=284.26 00:15:01.897 lat (usec): min=928, max=3499, avg=1658.50, stdev=284.67 00:15:01.897 clat percentiles (usec): 00:15:01.897 | 1.00th=[ 1090], 5.00th=[ 1237], 10.00th=[ 1319], 20.00th=[ 1418], 00:15:01.897 | 30.00th=[ 1500], 40.00th=[ 1565], 50.00th=[ 1614], 60.00th=[ 1696], 00:15:01.897 | 70.00th=[ 1778], 80.00th=[ 1876], 90.00th=[ 2024], 95.00th=[ 2180], 00:15:01.897 | 99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 2835], 99.95th=[ 2966], 00:15:01.897 | 99.99th=[ 3294] 00:15:01.897 bw ( KiB/s): min=136192, max=157184, per=100.00%, avg=142165.33, stdev=6312.36, samples=9 00:15:01.897 iops : min=34048, max=39296, avg=35541.33, stdev=1578.09, samples=9 00:15:01.897 lat (usec) : 1000=0.13% 00:15:01.897 lat (msec) : 2=88.26%, 4=11.61% 00:15:01.897 cpu : usr=54.44%, sys=42.26%, ctx=8, majf=0, minf=1063 00:15:01.897 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:01.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.897 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:01.897 issued rwts: total=177012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.897 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:01.897 00:15:01.897 Run status group 0 (all jobs): 00:15:01.897 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=691MiB (725MB), run=5001-5001msec 00:15:01.897 ----------------------------------------------------- 00:15:01.897 Suppressions used: 00:15:01.897 count bytes template 00:15:01.897 1 11 /usr/src/fio/parse.c 00:15:01.897 1 8 libtcmalloc_minimal.so 00:15:01.897 1 904 libcrypto.so 00:15:01.897 ----------------------------------------------------- 00:15:01.897 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.897 18:09:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.897 { 00:15:01.897 "subsystems": [ 00:15:01.897 { 00:15:01.897 "subsystem": "bdev", 00:15:01.897 "config": [ 00:15:01.897 { 00:15:01.897 "params": { 00:15:01.897 "io_mechanism": "io_uring_cmd", 00:15:01.897 "conserve_cpu": true, 00:15:01.897 "filename": "/dev/ng0n1", 00:15:01.897 "name": "xnvme_bdev" 00:15:01.897 }, 00:15:01.897 "method": "bdev_xnvme_create" 00:15:01.897 }, 00:15:01.897 { 00:15:01.897 "method": "bdev_wait_for_examine" 00:15:01.897 } 00:15:01.897 ] 00:15:01.897 } 00:15:01.897 ] 00:15:01.897 } 00:15:01.897 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:01.897 fio-3.35 00:15:01.897 Starting 1 thread 00:15:07.185 00:15:07.185 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84708: Fri Dec 13 18:09:41 2024 00:15:07.185 write: IOPS=37.4k, BW=146MiB/s (153MB/s)(730MiB/5002msec); 0 zone resets 00:15:07.185 slat (usec): min=2, max=223, avg= 4.09, stdev= 2.37 00:15:07.185 clat (usec): min=420, max=4784, avg=1548.15, stdev=294.47 00:15:07.185 lat (usec): min=424, max=4790, avg=1552.24, stdev=295.13 00:15:07.185 clat percentiles (usec): 00:15:07.185 | 1.00th=[ 1037], 5.00th=[ 1123], 10.00th=[ 1188], 20.00th=[ 1287], 00:15:07.185 | 30.00th=[ 1369], 40.00th=[ 1450], 50.00th=[ 1516], 60.00th=[ 1598], 00:15:07.185 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2073], 00:15:07.185 | 99.00th=[ 2376], 99.50th=[ 2540], 99.90th=[ 2900], 99.95th=[ 3097], 00:15:07.185 | 99.99th=[ 3523] 00:15:07.185 bw ( KiB/s): min=137240, max=174512, per=100.00%, avg=150557.33, stdev=13311.08, samples=9 00:15:07.185 iops : min=34310, max=43628, avg=37639.33, stdev=3327.77, samples=9 00:15:07.185 lat (usec) : 500=0.01%, 1000=0.33% 00:15:07.185 lat (msec) : 2=92.59%, 4=7.07%, 10=0.01% 00:15:07.185 cpu : usr=57.11%, sys=38.83%, ctx=10, majf=0, minf=1064 00:15:07.185 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:15:07.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.185 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:07.185 issued rwts: total=0,186975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.185 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:07.185 00:15:07.185 Run status group 0 (all jobs): 00:15:07.185 WRITE: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=730MiB (766MB), run=5002-5002msec 00:15:07.446 ----------------------------------------------------- 00:15:07.446 Suppressions used: 00:15:07.446 count bytes template 00:15:07.446 1 11 /usr/src/fio/parse.c 00:15:07.446 1 8 libtcmalloc_minimal.so 00:15:07.446 1 904 libcrypto.so 00:15:07.446 ----------------------------------------------------- 00:15:07.446 00:15:07.446 00:15:07.446 real 0m12.236s 00:15:07.446 user 0m6.849s 00:15:07.446 sys 0m4.700s 00:15:07.446 18:09:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.446 ************************************ 00:15:07.446 END TEST xnvme_fio_plugin 00:15:07.446 ************************************ 00:15:07.446 18:09:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:07.446 18:09:41 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 84261 00:15:07.446 18:09:41 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84261 ']' 00:15:07.446 18:09:41 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 84261 00:15:07.446 Process with pid 84261 is not 
found 00:15:07.447 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84261) - No such process 00:15:07.447 18:09:41 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 84261 is not found' 00:15:07.447 ************************************ 00:15:07.447 END TEST nvme_xnvme 00:15:07.447 ************************************ 00:15:07.447 00:15:07.447 real 2m57.413s 00:15:07.447 user 1m31.264s 00:15:07.447 sys 1m12.119s 00:15:07.447 18:09:41 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.447 18:09:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.447 18:09:41 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:07.447 18:09:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:07.447 18:09:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.447 18:09:41 -- common/autotest_common.sh@10 -- # set +x 00:15:07.447 ************************************ 00:15:07.447 START TEST blockdev_xnvme 00:15:07.447 ************************************ 00:15:07.447 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:07.447 * Looking for test storage... 00:15:07.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:07.709 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:07.709 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:07.709 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:07.709 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.709 18:09:41 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:07.709 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.709 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:07.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.709 --rc genhtml_branch_coverage=1 00:15:07.709 --rc genhtml_function_coverage=1 00:15:07.709 --rc genhtml_legend=1 00:15:07.709 --rc geninfo_all_blocks=1 00:15:07.709 --rc geninfo_unexecuted_blocks=1 00:15:07.709 00:15:07.709 ' 00:15:07.709 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:07.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.709 --rc genhtml_branch_coverage=1 00:15:07.709 --rc genhtml_function_coverage=1 00:15:07.709 --rc genhtml_legend=1 00:15:07.710 --rc geninfo_all_blocks=1 00:15:07.710 --rc geninfo_unexecuted_blocks=1 00:15:07.710 00:15:07.710 ' 00:15:07.710 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:07.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.710 --rc genhtml_branch_coverage=1 00:15:07.710 --rc genhtml_function_coverage=1 00:15:07.710 --rc genhtml_legend=1 00:15:07.710 --rc geninfo_all_blocks=1 00:15:07.710 --rc geninfo_unexecuted_blocks=1 00:15:07.710 00:15:07.710 ' 00:15:07.710 18:09:41 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:07.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.710 --rc genhtml_branch_coverage=1 00:15:07.710 --rc genhtml_function_coverage=1 00:15:07.710 --rc genhtml_legend=1 00:15:07.710 --rc geninfo_all_blocks=1 00:15:07.710 --rc geninfo_unexecuted_blocks=1 00:15:07.710 00:15:07.710 ' 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=84837 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 84837 00:15:07.710 18:09:41 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 84837 ']' 00:15:07.710 18:09:41 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.710 18:09:41 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.710 18:09:41 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.710 18:09:41 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:07.710 18:09:41 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.710 18:09:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.710 [2024-12-13 18:09:42.016155] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
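(The start_spdk_tgt step traced above is the usual launch/trap/wait pattern. Condensed, and using the killprocess and waitforlisten helpers from common/autotest_common.sh that this trace calls into, it amounts to:

  # Launch the SPDK target in the background and remember its pid.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!

  # Kill the target on interrupt or exit so a failed test cannot leak the process.
  trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

  # Block until the pid is alive and listening on the default RPC socket /var/tmp/spdk.sock.
  waitforlisten "$spdk_tgt_pid"

Once the target answers RPCs, setup_xnvme_conf, traced below, walks /dev/nvme*n*, skips zoned namespaces, and batches one bdev_xnvme_create call per node. A single such call, with the io_uring mechanism and the -c flag exactly as issued in this run, looks like:

  # Create an xNVMe bdev named nvme0n1 on top of /dev/nvme0n1 using io_uring.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
)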
00:15:07.710 [2024-12-13 18:09:42.016601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84837 ] 00:15:07.972 [2024-12-13 18:09:42.164709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.972 [2024-12-13 18:09:42.205143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.545 18:09:42 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.545 18:09:42 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:08.545 18:09:42 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:08.545 18:09:42 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:08.545 18:09:42 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:08.545 18:09:42 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:08.545 18:09:42 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:09.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:09.692 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:09.692 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:09.692 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:09.692 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:09.692 18:09:43 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.692 18:09:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:09.692 18:09:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.692 nvme0n1 00:15:09.692 nvme0n2 00:15:09.692 nvme0n3 00:15:09.692 nvme1n1 00:15:09.692 nvme2n1 00:15:09.692 nvme3n1 00:15:09.692 18:09:44 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.692 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:09.692 18:09:44 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.692 18:09:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.692 18:09:44 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.692 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:09.954 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:09.954 18:09:44 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.954 18:09:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.954 18:09:44 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.954 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:09.954 18:09:44 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.954 18:09:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.955 
18:09:44 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "93d1ee6e-9be1-4fec-af06-e56d23520e1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "93d1ee6e-9be1-4fec-af06-e56d23520e1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "fa8d736b-91e7-4330-a7c7-0442df6dedd4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fa8d736b-91e7-4330-a7c7-0442df6dedd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "615a31f4-3623-44cb-9ad0-b1e41802f562"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "615a31f4-3623-44cb-9ad0-b1e41802f562",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "53de134c-4d49-43c0-84f2-ebace22bfc04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "53de134c-4d49-43c0-84f2-ebace22bfc04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0ce16a37-d913-4dae-aa54-6e3eb6221d5c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0ce16a37-d913-4dae-aa54-6e3eb6221d5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "37b5a0bd-5a04-46ae-9887-eaae27f9de2b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37b5a0bd-5a04-46ae-9887-eaae27f9de2b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:09.955 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 84837 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84837 ']' 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 84837 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 84837 00:15:09.955 killing process with pid 84837 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84837' 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 84837 00:15:09.955 18:09:44 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 84837 00:15:10.528 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:10.528 18:09:44 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:10.528 18:09:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:10.528 18:09:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.528 18:09:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.528 ************************************ 00:15:10.528 START TEST bdev_hello_world 00:15:10.528 ************************************ 00:15:10.528 18:09:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:10.528 [2024-12-13 18:09:44.812208] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:10.529 [2024-12-13 18:09:44.812378] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85104 ] 00:15:10.790 [2024-12-13 18:09:44.958256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.790 [2024-12-13 18:09:44.998836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.051 [2024-12-13 18:09:45.263137] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:11.051 [2024-12-13 18:09:45.263466] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:11.051 [2024-12-13 18:09:45.263529] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:11.051 [2024-12-13 18:09:45.266032] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:11.051 [2024-12-13 18:09:45.267071] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:11.051 [2024-12-13 18:09:45.267124] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:11.051 [2024-12-13 18:09:45.267592] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
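(The unclaimed-bdev list printed earlier comes from bdev_get_bdevs filtered through jq in two steps: mapfile over select(.claimed == false), then .name. A condensed one-liner with the same effect would be roughly:

  # List the names of all bdevs that no other module has claimed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.claimed == false) | .name'

The bdev_hello_world run whose NOTICE lines appear above boils down to one invocation of the hello_bdev example against the first of those names:

  # Open nvme0n1 from bdev.json, write "Hello World!", read it back, then stop the app.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b nvme0n1
)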
00:15:11.051 00:15:11.051 [2024-12-13 18:09:45.267627] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:11.312 00:15:11.312 real 0m0.788s 00:15:11.312 user 0m0.404s 00:15:11.312 sys 0m0.236s 00:15:11.312 ************************************ 00:15:11.312 END TEST bdev_hello_world 00:15:11.312 ************************************ 00:15:11.312 18:09:45 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.312 18:09:45 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:11.312 18:09:45 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:11.312 18:09:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:11.312 18:09:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.312 18:09:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:11.312 ************************************ 00:15:11.312 START TEST bdev_bounds 00:15:11.312 ************************************ 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85135 00:15:11.312 Process bdevio pid: 85135 00:15:11.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85135' 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85135 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 85135 ']' 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.312 18:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:11.312 [2024-12-13 18:09:45.668471] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
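(bdev_bounds uses the same server/driver split: bdevio is started with -w so it waits for an RPC before exercising the bdevs, and -s 0 passes the PRE_RESERVED_MEM=0 set earlier in this trace; the whole CUnit matrix is then fired from tests.py. Condensed, the pair of commands from this trace is:

  # Start the bdevio server; -w = wait for the perform_tests RPC before running.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  waitforlisten "$bdevio_pid"

  # Kick off all bdevio suites over RPC; results print as the CUnit run below.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
)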
00:15:11.312 [2024-12-13 18:09:45.668815] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85135 ] 00:15:11.573 [2024-12-13 18:09:45.815896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:11.573 [2024-12-13 18:09:45.859081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.573 [2024-12-13 18:09:45.859401] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.573 [2024-12-13 18:09:45.859506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.519 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.519 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:12.519 18:09:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:12.519 I/O targets: 00:15:12.519 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:12.519 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:12.519 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:12.519 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:12.519 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:12.519 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:12.519 00:15:12.519 00:15:12.519 CUnit - A unit testing framework for C - Version 2.1-3 00:15:12.519 http://cunit.sourceforge.net/ 00:15:12.519 00:15:12.519 00:15:12.519 Suite: bdevio tests on: nvme3n1 00:15:12.519 Test: blockdev write read block ...passed 00:15:12.519 Test: blockdev write zeroes read block ...passed 00:15:12.519 Test: blockdev write zeroes read no split ...passed 00:15:12.519 Test: blockdev write zeroes read split ...passed 00:15:12.519 Test: blockdev write zeroes read split partial ...passed 00:15:12.519 Test: blockdev reset ...passed 00:15:12.519 Test: blockdev write read 8 blocks ...passed 00:15:12.519 Test: blockdev write read size > 128k ...passed 00:15:12.519 Test: blockdev write read invalid size ...passed 00:15:12.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.519 Test: blockdev write read max offset ...passed 00:15:12.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.519 Test: blockdev writev readv 8 blocks ...passed 00:15:12.519 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.519 Test: blockdev writev readv block ...passed 00:15:12.519 Test: blockdev writev readv size > 128k ...passed 00:15:12.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.519 Test: blockdev comparev and writev ...passed 00:15:12.519 Test: blockdev nvme passthru rw ...passed 00:15:12.519 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.519 Test: blockdev nvme admin passthru ...passed 00:15:12.519 Test: blockdev copy ...passed 00:15:12.519 Suite: bdevio tests on: nvme2n1 00:15:12.519 Test: blockdev write read block ...passed 00:15:12.519 Test: blockdev write zeroes read block ...passed 00:15:12.519 Test: blockdev write zeroes read no split ...passed 00:15:12.519 Test: blockdev write zeroes read split ...passed 00:15:12.519 Test: blockdev write zeroes read split partial ...passed 00:15:12.519 Test: blockdev reset ...passed 
00:15:12.519 Test: blockdev write read 8 blocks ...passed 00:15:12.519 Test: blockdev write read size > 128k ...passed 00:15:12.519 Test: blockdev write read invalid size ...passed 00:15:12.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.519 Test: blockdev write read max offset ...passed 00:15:12.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.519 Test: blockdev writev readv 8 blocks ...passed 00:15:12.519 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.519 Test: blockdev writev readv block ...passed 00:15:12.519 Test: blockdev writev readv size > 128k ...passed 00:15:12.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.519 Test: blockdev comparev and writev ...passed 00:15:12.519 Test: blockdev nvme passthru rw ...passed 00:15:12.519 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.519 Test: blockdev nvme admin passthru ...passed 00:15:12.519 Test: blockdev copy ...passed 00:15:12.519 Suite: bdevio tests on: nvme1n1 00:15:12.519 Test: blockdev write read block ...passed 00:15:12.519 Test: blockdev write zeroes read block ...passed 00:15:12.519 Test: blockdev write zeroes read no split ...passed 00:15:12.519 Test: blockdev write zeroes read split ...passed 00:15:12.519 Test: blockdev write zeroes read split partial ...passed 00:15:12.519 Test: blockdev reset ...passed 00:15:12.519 Test: blockdev write read 8 blocks ...passed 00:15:12.519 Test: blockdev write read size > 128k ...passed 00:15:12.519 Test: blockdev write read invalid size ...passed 00:15:12.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.519 Test: blockdev write read max offset ...passed 00:15:12.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.519 Test: blockdev writev readv 8 blocks ...passed 00:15:12.519 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.519 Test: blockdev writev readv block ...passed 00:15:12.519 Test: blockdev writev readv size > 128k ...passed 00:15:12.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.519 Test: blockdev comparev and writev ...passed 00:15:12.519 Test: blockdev nvme passthru rw ...passed 00:15:12.519 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.519 Test: blockdev nvme admin passthru ...passed 00:15:12.519 Test: blockdev copy ...passed 00:15:12.519 Suite: bdevio tests on: nvme0n3 00:15:12.519 Test: blockdev write read block ...passed 00:15:12.519 Test: blockdev write zeroes read block ...passed 00:15:12.519 Test: blockdev write zeroes read no split ...passed 00:15:12.519 Test: blockdev write zeroes read split ...passed 00:15:12.519 Test: blockdev write zeroes read split partial ...passed 00:15:12.519 Test: blockdev reset ...passed 00:15:12.519 Test: blockdev write read 8 blocks ...passed 00:15:12.520 Test: blockdev write read size > 128k ...passed 00:15:12.520 Test: blockdev write read invalid size ...passed 00:15:12.520 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.520 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.520 Test: blockdev write read max offset ...passed 00:15:12.520 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.520 Test: blockdev writev readv 8 blocks 
...passed 00:15:12.520 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.520 Test: blockdev writev readv block ...passed 00:15:12.520 Test: blockdev writev readv size > 128k ...passed 00:15:12.520 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.520 Test: blockdev comparev and writev ...passed 00:15:12.520 Test: blockdev nvme passthru rw ...passed 00:15:12.520 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.520 Test: blockdev nvme admin passthru ...passed 00:15:12.520 Test: blockdev copy ...passed 00:15:12.520 Suite: bdevio tests on: nvme0n2 00:15:12.520 Test: blockdev write read block ...passed 00:15:12.520 Test: blockdev write zeroes read block ...passed 00:15:12.520 Test: blockdev write zeroes read no split ...passed 00:15:12.520 Test: blockdev write zeroes read split ...passed 00:15:12.520 Test: blockdev write zeroes read split partial ...passed 00:15:12.520 Test: blockdev reset ...passed 00:15:12.520 Test: blockdev write read 8 blocks ...passed 00:15:12.520 Test: blockdev write read size > 128k ...passed 00:15:12.520 Test: blockdev write read invalid size ...passed 00:15:12.520 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.520 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.520 Test: blockdev write read max offset ...passed 00:15:12.520 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.520 Test: blockdev writev readv 8 blocks ...passed 00:15:12.520 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.520 Test: blockdev writev readv block ...passed 00:15:12.520 Test: blockdev writev readv size > 128k ...passed 00:15:12.520 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.520 Test: blockdev comparev and writev ...passed 00:15:12.520 Test: blockdev nvme passthru rw ...passed 00:15:12.520 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.520 Test: blockdev nvme admin passthru ...passed 00:15:12.520 Test: blockdev copy ...passed 00:15:12.520 Suite: bdevio tests on: nvme0n1 00:15:12.520 Test: blockdev write read block ...passed 00:15:12.520 Test: blockdev write zeroes read block ...passed 00:15:12.520 Test: blockdev write zeroes read no split ...passed 00:15:12.520 Test: blockdev write zeroes read split ...passed 00:15:12.520 Test: blockdev write zeroes read split partial ...passed 00:15:12.520 Test: blockdev reset ...passed 00:15:12.520 Test: blockdev write read 8 blocks ...passed 00:15:12.520 Test: blockdev write read size > 128k ...passed 00:15:12.520 Test: blockdev write read invalid size ...passed 00:15:12.520 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.520 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.520 Test: blockdev write read max offset ...passed 00:15:12.520 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.520 Test: blockdev writev readv 8 blocks ...passed 00:15:12.520 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.520 Test: blockdev writev readv block ...passed 00:15:12.520 Test: blockdev writev readv size > 128k ...passed 00:15:12.520 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.781 Test: blockdev comparev and writev ...passed 00:15:12.781 Test: blockdev nvme passthru rw ...passed 00:15:12.781 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.781 Test: blockdev nvme admin passthru ...passed 00:15:12.781 Test: blockdev copy ...passed 
00:15:12.781 00:15:12.781 Run Summary: Type Total Ran Passed Failed Inactive 00:15:12.781 suites 6 6 n/a 0 0 00:15:12.781 tests 138 138 138 0 0 00:15:12.781 asserts 780 780 780 0 n/a 00:15:12.781 00:15:12.781 Elapsed time = 0.610 seconds 00:15:12.781 0 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 85135 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 85135 ']' 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 85135 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85135 00:15:12.781 killing process with pid 85135 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85135' 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 85135 00:15:12.781 18:09:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 85135 00:15:13.043 18:09:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:13.043 00:15:13.043 real 0m1.619s 00:15:13.043 user 0m3.903s 00:15:13.043 sys 0m0.383s 00:15:13.043 18:09:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.043 ************************************ 00:15:13.043 END TEST bdev_bounds 00:15:13.043 ************************************ 00:15:13.043 18:09:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:13.043 18:09:47 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:13.043 18:09:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:13.043 18:09:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.043 18:09:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.043 ************************************ 00:15:13.043 START TEST bdev_nbd 00:15:13.043 ************************************ 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
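(The bdev_nbd test being set up here only runs when the kernel nbd module is present, and it pairs the first six /dev/nbd* nodes with the six xnvme bdevs. The guard and the pairing from the trace that follows, in isolation:

  # Skip the nbd tests entirely when the kernel module is absent.
  [[ -e /sys/module/nbd ]] || return 0

  # Six bdevs get the first six nbd nodes from nbd_all, as listed below.
  bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
)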
00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85186 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85186 /var/tmp/spdk-nbd.sock 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 85186 ']' 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:13.043 18:09:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:13.043 [2024-12-13 18:09:47.361917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
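(nbd_rpc_start_stop_verify, which the trace below steps through, exports each bdev over NBD through the dedicated /var/tmp/spdk-nbd.sock socket, waits for the kernel node, and proves it readable with one direct-I/O block; the waitfornbd helper actually retries the /proc/partitions grep up to 20 times, compressed here to a single check. Per bdev, the sequence is roughly:

  # Let SPDK pick a free /dev/nbd* for the bdev; the RPC prints the chosen device.
  nbd_device=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk nvme0n1)

  # Wait until the kernel publishes the device, then read one 4096-byte block through it.
  grep -q -w "${nbd_device##*/}" /proc/partitions
  dd if="$nbd_device" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
      bs=4096 count=1 iflag=direct

  # Tear the export down again when done.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_stop_disk "$nbd_device"
)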
00:15:13.043 [2024-12-13 18:09:47.362074] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.305 [2024-12-13 18:09:47.511013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.305 [2024-12-13 18:09:47.549545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:13.878 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.140 
1+0 records in 00:15:14.140 1+0 records out 00:15:14.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106431 s, 3.8 MB/s 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:14.140 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:14.401 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:14.401 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:14.401 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:14.401 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:14.401 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.401 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.402 1+0 records in 00:15:14.402 1+0 records out 00:15:14.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108839 s, 3.8 MB/s 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:14.402 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:14.662 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:14.662 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:14.662 18:09:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.663 1+0 records in 00:15:14.663 1+0 records out 00:15:14.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104594 s, 3.9 MB/s 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:14.663 18:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.924 1+0 records in 00:15:14.924 1+0 records out 00:15:14.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127647 s, 3.2 MB/s 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:14.924 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.185 1+0 records in 00:15:15.185 1+0 records out 00:15:15.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011837 s, 3.5 MB/s 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:15.185 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:15.447 18:09:49 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.447 1+0 records in 00:15:15.447 1+0 records out 00:15:15.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000999704 s, 4.1 MB/s 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:15.447 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:15.708 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:15.708 { 00:15:15.708 "nbd_device": "/dev/nbd0", 00:15:15.708 "bdev_name": "nvme0n1" 00:15:15.708 }, 00:15:15.708 { 00:15:15.708 "nbd_device": "/dev/nbd1", 00:15:15.708 "bdev_name": "nvme0n2" 00:15:15.708 }, 00:15:15.708 { 00:15:15.709 "nbd_device": "/dev/nbd2", 00:15:15.709 "bdev_name": "nvme0n3" 00:15:15.709 }, 00:15:15.709 { 00:15:15.709 "nbd_device": "/dev/nbd3", 00:15:15.709 "bdev_name": "nvme1n1" 00:15:15.709 }, 00:15:15.709 { 00:15:15.709 "nbd_device": "/dev/nbd4", 00:15:15.709 "bdev_name": "nvme2n1" 00:15:15.709 }, 00:15:15.709 { 00:15:15.709 "nbd_device": "/dev/nbd5", 00:15:15.709 "bdev_name": "nvme3n1" 00:15:15.709 } 00:15:15.709 ]' 00:15:15.709 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:15.709 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:15.709 18:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:15.709 { 00:15:15.709 "nbd_device": "/dev/nbd0", 00:15:15.709 "bdev_name": "nvme0n1" 00:15:15.709 }, 00:15:15.709 { 00:15:15.709 "nbd_device": "/dev/nbd1", 00:15:15.709 "bdev_name": "nvme0n2" 00:15:15.709 }, 00:15:15.709 { 00:15:15.709 "nbd_device": "/dev/nbd2", 00:15:15.709 "bdev_name": "nvme0n3" 00:15:15.709 }, 00:15:15.709 { 00:15:15.709 "nbd_device": "/dev/nbd3", 00:15:15.709 "bdev_name": "nvme1n1" 00:15:15.709 }, 00:15:15.709 { 00:15:15.709 "nbd_device": "/dev/nbd4", 00:15:15.709 "bdev_name": "nvme2n1" 00:15:15.709 }, 00:15:15.709 { 00:15:15.709 "nbd_device": 
"/dev/nbd5", 00:15:15.709 "bdev_name": "nvme3n1" 00:15:15.709 } 00:15:15.709 ]' 00:15:15.709 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:15.709 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.709 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:15.709 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:15.709 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:15.709 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.709 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.970 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.232 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.493 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:16.754 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:16.754 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:16.754 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:16.755 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.755 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.755 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:16.755 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.755 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.755 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.755 18:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.015 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
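The stop sequence that follows mirrors the start sequence: each nbd_stop_disk RPC is paired with a waitfornbd_exit poll that watches /proc/partitions until the device node is gone, and the round ends with a zero-count assertion against nbd_get_disks. A condensed sketch under the same assumptions as the earlier helper (RPC and SOCK are the paths shown in the trace; the sleep is assumed):

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# Inverse of waitfornbd (nbd_common.sh@35-45): poll until the device
# has disappeared from /proc/partitions after nbd_stop_disk.
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
}

for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
    "$RPC" -s "$SOCK" nbd_stop_disk "$nbd"
    waitfornbd_exit "$(basename "$nbd")"
done

# Final assertion (nbd_common.sh@61-66): nbd_get_disks must now return
# an empty array. grep -c exits nonzero on zero matches, which is why
# the trace shows the bare `true` just before count=0.
count=$("$RPC" -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || exit 1
```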
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:17.274 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:17.533 /dev/nbd0 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.533 1+0 records in 00:15:17.533 1+0 records out 00:15:17.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848231 s, 4.8 MB/s 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:17.533 18:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:17.792 /dev/nbd1 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.792 1+0 records in 00:15:17.792 1+0 records out 00:15:17.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083745 s, 4.9 MB/s 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:17.792 18:09:52 
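From blockdev.sh@322 onward the trace re-runs the whole cycle as nbd_rpc_data_verify, this time mapping the six bdevs onto /dev/nbd0, nbd1 and nbd10 through nbd13. The attach loop at nbd_common.sh@14-17 reduces to the sketch below (device pairs copied from the trace; RPC, SOCK, and waitfornbd as in the previous sketches):

```bash
# Attach loop of nbd_rpc_data_verify as traced: export each bdev over
# NBD through the spdk-nbd RPC socket, then block until it is readable.
bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

for ((i = 0; i < ${#bdev_list[@]}; i++)); do
    "$RPC" -s "$SOCK" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
done
```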
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:17.792 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:18.052 /dev/nbd10 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.052 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.052 1+0 records in 00:15:18.052 1+0 records out 00:15:18.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775685 s, 5.3 MB/s 00:15:18.053 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.053 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.053 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.053 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.053 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.053 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.053 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:18.053 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:18.314 /dev/nbd11 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.314 18:09:52 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.314 1+0 records in 00:15:18.314 1+0 records out 00:15:18.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135551 s, 3.0 MB/s 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.314 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.315 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.315 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:18.315 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:18.576 /dev/nbd12 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.576 1+0 records in 00:15:18.576 1+0 records out 00:15:18.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105579 s, 3.9 MB/s 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:18.576 18:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:18.838 /dev/nbd13 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.838 1+0 records in 00:15:18.838 1+0 records out 00:15:18.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00142199 s, 2.9 MB/s 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.838 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd0", 00:15:19.100 "bdev_name": "nvme0n1" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd1", 00:15:19.100 "bdev_name": "nvme0n2" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd10", 00:15:19.100 "bdev_name": "nvme0n3" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd11", 00:15:19.100 "bdev_name": "nvme1n1" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd12", 00:15:19.100 "bdev_name": "nvme2n1" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd13", 00:15:19.100 "bdev_name": "nvme3n1" 00:15:19.100 } 00:15:19.100 ]' 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd 
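Before any data is written, the trace double-checks the mapping: nbd_get_disks returns the JSON array of nbd_device/bdev_name pairs echoed above, which jq flattens into the device list that the count check gates on. Roughly (nbd_common.sh@63-66, values from this run):

```bash
# nbd_get_disks hands back the JSON array echoed in the trace; jq strips
# it down to the /dev/nbdX paths, and the count must equal the six bdevs.
nbd_disks_json=$("$RPC" -s "$SOCK" nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)   # 6 in this run
[ "$count" -eq 6 ] || exit 1
```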
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd0", 00:15:19.100 "bdev_name": "nvme0n1" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd1", 00:15:19.100 "bdev_name": "nvme0n2" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd10", 00:15:19.100 "bdev_name": "nvme0n3" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd11", 00:15:19.100 "bdev_name": "nvme1n1" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd12", 00:15:19.100 "bdev_name": "nvme2n1" 00:15:19.100 }, 00:15:19.100 { 00:15:19.100 "nbd_device": "/dev/nbd13", 00:15:19.100 "bdev_name": "nvme3n1" 00:15:19.100 } 00:15:19.100 ]' 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:19.100 /dev/nbd1 00:15:19.100 /dev/nbd10 00:15:19.100 /dev/nbd11 00:15:19.100 /dev/nbd12 00:15:19.100 /dev/nbd13' 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:19.100 /dev/nbd1 00:15:19.100 /dev/nbd10 00:15:19.100 /dev/nbd11 00:15:19.100 /dev/nbd12 00:15:19.100 /dev/nbd13' 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:19.100 256+0 records in 00:15:19.100 256+0 records out 00:15:19.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681837 s, 154 MB/s 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.100 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:19.362 256+0 records in 00:15:19.362 256+0 records out 00:15:19.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235654 s, 4.4 MB/s 00:15:19.362 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.362 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:19.624 256+0 records in 00:15:19.624 256+0 records out 00:15:19.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238327 s, 
4.4 MB/s 00:15:19.624 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.624 18:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:19.886 256+0 records in 00:15:19.886 256+0 records out 00:15:19.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238334 s, 4.4 MB/s 00:15:19.886 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.886 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:20.147 256+0 records in 00:15:20.147 256+0 records out 00:15:20.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.303587 s, 3.5 MB/s 00:15:20.147 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.147 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:20.407 256+0 records in 00:15:20.407 256+0 records out 00:15:20.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243997 s, 4.3 MB/s 00:15:20.407 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.407 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:20.667 256+0 records in 00:15:20.667 256+0 records out 00:15:20.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239179 s, 4.4 MB/s 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:20.668 
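The write/verify pass that produced the dd and cmp lines here is nbd_dd_data_verify (nbd_common.sh@70-85): one random 1 MiB buffer is pushed through every export with O_DIRECT and then compared back byte for byte. In outline, with paths and sizes taken from the trace:

```bash
# One shared random pattern, written to and verified against every NBD
# export; a cmp mismatch aborts the run.
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB pattern

for nbd in "${nbd_list[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
done
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$nbd"                              # verify pass
done
rm "$tmp"
```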
18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.668 18:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.927 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:21.184 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:21.184 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:21.184 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:21.184 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.184 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.184 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:21.184 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.184 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.184 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.185 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.443 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:21.701 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:21.701 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:21.701 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:21.701 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.701 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.702 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:21.702 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.702 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.702 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.702 18:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.960 
18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:21.960 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:22.218 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:22.218 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:22.218 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:22.218 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:22.218 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:22.218 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:22.219 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:22.477 malloc_lvol_verify 00:15:22.477 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:22.477 e103eeb5-de83-4809-95ec-78a28a71757e 00:15:22.477 18:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:22.735 d40ea731-7da1-43e7-90e5-e281be1687cf 00:15:22.735 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:22.994 /dev/nbd0 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:22.994 mke2fs 1.47.0 (5-Feb-2023) 00:15:22.994 Discarding device blocks: 0/4096 
done 00:15:22.994 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:22.994 00:15:22.994 Allocating group tables: 0/1 done 00:15:22.994 Writing inode tables: 0/1 done 00:15:22.994 Creating journal (1024 blocks): done 00:15:22.994 Writing superblocks and filesystem accounting information: 0/1 done 00:15:22.994 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.994 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85186 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 85186 ']' 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 85186 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85186 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.254 killing process with pid 85186 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85186' 00:15:23.254 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 85186 00:15:23.255 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 85186 00:15:23.516 18:09:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:23.516 00:15:23.516 real 0m10.354s 00:15:23.516 user 0m13.973s 00:15:23.516 sys 0m3.803s 00:15:23.516 ************************************ 00:15:23.516 END TEST bdev_nbd 00:15:23.516 ************************************ 00:15:23.516 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.516 18:09:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
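The bdev_nbd test closes with nbd_with_lvol_verify, whose mkfs.ext4 transcript appears just above: a 16 MiB malloc bdev hosts an lvolstore, a 4 MiB logical volume carved from it is exported as /dev/nbd0, and building a real ext4 filesystem on it exercises the full write path end to end. Condensed from the traced RPCs:

```bash
# nbd_with_lvol_verify as traced (nbd_common.sh@131-142). Sizes match
# the log: 4096 1k-blocks from mkfs == 8192 512-byte sectors reported
# by /sys/block/nbd0/size.
"$RPC" -s "$SOCK" bdev_malloc_create -b malloc_lvol_verify 16 512
"$RPC" -s "$SOCK" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$RPC" -s "$SOCK" bdev_lvol_create lvol 4 -l lvs
"$RPC" -s "$SOCK" nbd_start_disk lvs/lvol /dev/nbd0

# Capacity must be visible in sysfs before mkfs touches the device.
[[ -e /sys/block/nbd0/size && $(< /sys/block/nbd0/size) -ne 0 ]]

mkfs.ext4 /dev/nbd0
"$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
```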
00:15:23.516 18:09:57 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:23.516 18:09:57 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:23.516 18:09:57 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:23.516 18:09:57 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:23.516 18:09:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:23.516 18:09:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.516 18:09:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.516 ************************************ 00:15:23.517 START TEST bdev_fio 00:15:23.517 ************************************ 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:23.517 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- 
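The bdev_fio suite that starts here first generates a fio job file: fio_config_gen writes the shared verify/AIO header (plus serialize_overlap=1 once the fio 3.x version check above passes), and blockdev.sh@340-342 then appends one job section per bdev, which is what the echo lines that follow produce. A sketch of that append loop; the redirection into bdev.fio is implied by the surrounding script rather than visible in the xtrace:

```bash
# Append one [job_<bdev>] section per xNVMe bdev to the generated job
# file so a single fio invocation drives all six devices.
fio_config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
    {
        echo "[job_$b]"
        echo "filename=$b"
    } >> "$fio_config"
done
```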
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:23.517 ************************************ 00:15:23.517 START TEST bdev_fio_rw_verify 00:15:23.517 ************************************ 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:23.517 18:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.777 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.777 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.777 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.777 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.777 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.777 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.777 fio-3.35 00:15:23.777 Starting 6 threads 00:15:36.113 00:15:36.113 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85587: Fri Dec 13 18:10:08 2024 00:15:36.113 read: IOPS=13.7k, BW=53.3MiB/s (55.9MB/s)(534MiB/10003msec) 00:15:36.113 slat (usec): min=2, max=2354, avg= 6.90, stdev=18.33 00:15:36.113 clat (usec): min=70, max=7068, avg=1432.36, stdev=770.42 00:15:36.113 lat (usec): min=74, max=7099, avg=1439.26, stdev=771.16 
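The job list and "Starting 6 threads" line above come from the actual fio launch. Because the SPDK fio plugin is built with ASan, the runtime it links against has to be preloaded ahead of the plugin itself; autotest_common.sh@1341-1356 discovers it with ldd before exec'ing fio. Reassembled from the trace:

```bash
# Locate the ASan runtime the bdev fio plugin links against and preload
# it together with the plugin, exactly as the trace shows.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
    --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 \
    --aux-path=/home/vagrant/spdk_repo/spdk/../output
```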
00:15:36.113 clat percentiles (usec): 00:15:36.113 | 50.000th=[ 1336], 99.000th=[ 3818], 99.900th=[ 5473], 99.990th=[ 6783], 00:15:36.113 | 99.999th=[ 7046] 00:15:36.114 write: IOPS=14.1k, BW=55.0MiB/s (57.7MB/s)(550MiB/10003msec); 0 zone resets 00:15:36.114 slat (usec): min=13, max=4228, avg=42.12, stdev=143.07 00:15:36.114 clat (usec): min=108, max=8598, avg=1688.43, stdev=841.92 00:15:36.114 lat (usec): min=122, max=8615, avg=1730.55, stdev=854.54 00:15:36.114 clat percentiles (usec): 00:15:36.114 | 50.000th=[ 1565], 99.000th=[ 4293], 99.900th=[ 5932], 99.990th=[ 7701], 00:15:36.114 | 99.999th=[ 8160] 00:15:36.114 bw ( KiB/s): min=48788, max=71600, per=100.00%, avg=56614.00, stdev=1219.59, samples=114 00:15:36.114 iops : min=12194, max=17899, avg=14152.63, stdev=304.91, samples=114 00:15:36.114 lat (usec) : 100=0.01%, 250=1.25%, 500=4.89%, 750=8.02%, 1000=11.10% 00:15:36.114 lat (msec) : 2=50.20%, 4=23.36%, 10=1.17% 00:15:36.114 cpu : usr=43.89%, sys=31.81%, ctx=4648, majf=0, minf=16023 00:15:36.114 IO depths : 1=11.3%, 2=23.7%, 4=51.2%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.114 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.114 issued rwts: total=136618,140912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.114 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:36.114 00:15:36.114 Run status group 0 (all jobs): 00:15:36.114 READ: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=534MiB (560MB), run=10003-10003msec 00:15:36.114 WRITE: bw=55.0MiB/s (57.7MB/s), 55.0MiB/s-55.0MiB/s (57.7MB/s-57.7MB/s), io=550MiB (577MB), run=10003-10003msec 00:15:36.114 ----------------------------------------------------- 00:15:36.114 Suppressions used: 00:15:36.114 count bytes template 00:15:36.114 6 48 /usr/src/fio/parse.c 00:15:36.114 4200 403200 /usr/src/fio/iolog.c 00:15:36.114 1 8 libtcmalloc_minimal.so 00:15:36.114 1 904 libcrypto.so 00:15:36.114 ----------------------------------------------------- 00:15:36.114 00:15:36.114 00:15:36.114 real 0m11.200s 00:15:36.114 user 0m27.087s 00:15:36.114 sys 0m19.407s 00:15:36.114 ************************************ 00:15:36.114 END TEST bdev_fio_rw_verify 00:15:36.114 ************************************ 00:15:36.114 18:10:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.114 18:10:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "93d1ee6e-9be1-4fec-af06-e56d23520e1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "93d1ee6e-9be1-4fec-af06-e56d23520e1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "fa8d736b-91e7-4330-a7c7-0442df6dedd4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fa8d736b-91e7-4330-a7c7-0442df6dedd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "615a31f4-3623-44cb-9ad0-b1e41802f562"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "615a31f4-3623-44cb-9ad0-b1e41802f562",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "53de134c-4d49-43c0-84f2-ebace22bfc04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "53de134c-4d49-43c0-84f2-ebace22bfc04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0ce16a37-d913-4dae-aa54-6e3eb6221d5c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0ce16a37-d913-4dae-aa54-6e3eb6221d5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "37b5a0bd-5a04-46ae-9887-eaae27f9de2b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37b5a0bd-5a04-46ae-9887-eaae27f9de2b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:36.114 /home/vagrant/spdk_repo/spdk 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:36.114 18:10:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:36.114 00:15:36.114 real 0m11.358s 00:15:36.114 user 
0m27.157s 00:15:36.115 sys 0m19.481s 00:15:36.115 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.115 18:10:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:36.115 ************************************ 00:15:36.115 END TEST bdev_fio 00:15:36.115 ************************************ 00:15:36.115 18:10:09 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:36.115 18:10:09 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:36.115 18:10:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:36.115 18:10:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.115 18:10:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.115 ************************************ 00:15:36.115 START TEST bdev_verify 00:15:36.115 ************************************ 00:15:36.115 18:10:09 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:36.115 [2024-12-13 18:10:09.179505] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:36.115 [2024-12-13 18:10:09.179637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85751 ] 00:15:36.115 [2024-12-13 18:10:09.327817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:36.115 [2024-12-13 18:10:09.369654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.115 [2024-12-13 18:10:09.369738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.115 Running I/O for 5 seconds... 
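
The bdev_verify stage beginning here drives all six xNVMe bdevs through bdevperf for 5 seconds of 4 KiB verify I/O at queue depth 128 on two reactors (-m 0x3); with -C every core runs a job against every bdev, which is why each bdev appears twice in the result table below (Core Mask 0x1 and 0x2). A minimal sketch of re-running the same workload by hand, with the paths and flags exactly as logged above:

  # verify workload: write, read back and compare; 4 KiB I/Os, qd 128, 5 s,
  # jobs on both cores (-m 0x3) for every bdev (-C)
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
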
00:15:37.634 24064.00 IOPS, 94.00 MiB/s [2024-12-13T18:10:12.952Z] 24160.00 IOPS, 94.38 MiB/s [2024-12-13T18:10:14.339Z] 23829.33 IOPS, 93.08 MiB/s [2024-12-13T18:10:14.912Z] 23872.00 IOPS, 93.25 MiB/s [2024-12-13T18:10:14.912Z] 23712.00 IOPS, 92.63 MiB/s 00:15:40.535 Latency(us) 00:15:40.535 [2024-12-13T18:10:14.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.535 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.535 Verification LBA range: start 0x0 length 0x80000 00:15:40.535 nvme0n1 : 5.04 1931.82 7.55 0.00 0.00 66144.18 10536.17 64124.46 00:15:40.535 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.535 Verification LBA range: start 0x80000 length 0x80000 00:15:40.535 nvme0n1 : 5.07 1792.84 7.00 0.00 0.00 71266.19 11090.71 67754.14 00:15:40.535 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.535 Verification LBA range: start 0x0 length 0x80000 00:15:40.536 nvme0n2 : 5.04 1931.23 7.54 0.00 0.00 66064.31 6553.60 60898.07 00:15:40.536 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.536 Verification LBA range: start 0x80000 length 0x80000 00:15:40.536 nvme0n2 : 5.06 1797.34 7.02 0.00 0.00 70941.52 3579.27 65334.35 00:15:40.536 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.536 Verification LBA range: start 0x0 length 0x80000 00:15:40.536 nvme0n3 : 5.04 1930.66 7.54 0.00 0.00 65978.44 10737.82 61301.37 00:15:40.536 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.536 Verification LBA range: start 0x80000 length 0x80000 00:15:40.536 nvme0n3 : 5.06 1796.81 7.02 0.00 0.00 70808.87 8116.38 66544.25 00:15:40.536 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.536 Verification LBA range: start 0x0 length 0xbd0bd 00:15:40.536 nvme1n1 : 5.07 2525.68 9.87 0.00 0.00 50277.75 5973.86 54848.59 00:15:40.536 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.536 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:40.536 nvme1n1 : 5.08 2477.83 9.68 0.00 0.00 51170.41 6276.33 60898.07 00:15:40.536 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.536 Verification LBA range: start 0x0 length 0x20000 00:15:40.536 nvme2n1 : 5.07 1993.36 7.79 0.00 0.00 63627.87 2848.30 60091.47 00:15:40.536 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.536 Verification LBA range: start 0x20000 length 0x20000 00:15:40.536 nvme2n1 : 5.08 1864.07 7.28 0.00 0.00 67941.33 7360.20 67350.84 00:15:40.536 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.536 Verification LBA range: start 0x0 length 0xa0000 00:15:40.536 nvme3n1 : 5.05 1823.25 7.12 0.00 0.00 69523.73 9427.10 89935.56 00:15:40.536 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.536 Verification LBA range: start 0xa0000 length 0xa0000 00:15:40.536 nvme3n1 : 5.08 1588.25 6.20 0.00 0.00 79570.22 7561.85 127442.31 00:15:40.536 [2024-12-13T18:10:14.913Z] =================================================================================================================== 00:15:40.536 [2024-12-13T18:10:14.913Z] Total : 23453.15 91.61 0.00 0.00 65046.69 2848.30 127442.31 00:15:40.797 00:15:40.797 real 0m5.989s 00:15:40.797 user 0m9.465s 00:15:40.797 sys 0m1.599s 00:15:40.797 18:10:15 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.797 ************************************ 00:15:40.797 END TEST bdev_verify 00:15:40.797 ************************************ 00:15:40.797 18:10:15 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:40.797 18:10:15 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:40.797 18:10:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:40.797 18:10:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.797 18:10:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.797 ************************************ 00:15:40.797 START TEST bdev_verify_big_io 00:15:40.797 ************************************ 00:15:40.797 18:10:15 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:41.059 [2024-12-13 18:10:15.240876] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:41.059 [2024-12-13 18:10:15.241013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85840 ] 00:15:41.059 [2024-12-13 18:10:15.388353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:41.059 [2024-12-13 18:10:15.428449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.059 [2024-12-13 18:10:15.428621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.632 Running I/O for 5 seconds... 
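
bdev_verify_big_io repeats the verify workload with -o 65536, so each request is 64 KiB and exercises the bdev layer's request-splitting paths; IOPS drop accordingly in the table below while MiB/s stays comparable. To see which bdevs such a run targets and their geometry, the bdev list can be queried from a running SPDK target (default rpc.py socket assumed) and filtered with jq, in the same style blockdev.sh uses further up:

  # sketch: print name, block size and capacity of every registered bdev
  ./scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.name) \(.block_size)B x \(.num_blocks) blocks"'
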
00:15:47.757 1928.00 IOPS, 120.50 MiB/s [2024-12-13T18:10:22.704Z] 3011.50 IOPS, 188.22 MiB/s [2024-12-13T18:10:23.273Z] 3379.67 IOPS, 211.23 MiB/s 00:15:48.896 Latency(us) 00:15:48.896 [2024-12-13T18:10:23.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.896 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x0 length 0x8000 00:15:48.896 nvme0n1 : 5.81 145.87 9.12 0.00 0.00 857868.38 76223.41 884030.23 00:15:48.896 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x8000 length 0x8000 00:15:48.896 nvme0n1 : 6.08 81.55 5.10 0.00 0.00 1477920.19 11241.94 2181038.08 00:15:48.896 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x0 length 0x8000 00:15:48.896 nvme0n2 : 5.82 153.86 9.62 0.00 0.00 798439.82 6200.71 745295.56 00:15:48.896 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x8000 length 0x8000 00:15:48.896 nvme0n2 : 6.09 84.08 5.25 0.00 0.00 1339671.63 54445.29 1632552.17 00:15:48.896 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x0 length 0x8000 00:15:48.896 nvme0n3 : 5.82 121.06 7.57 0.00 0.00 981139.62 8116.38 2090699.22 00:15:48.896 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x8000 length 0x8000 00:15:48.896 nvme0n3 : 6.17 82.99 5.19 0.00 0.00 1275048.57 133895.09 1329271.73 00:15:48.896 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x0 length 0xbd0b 00:15:48.896 nvme1n1 : 5.80 198.48 12.40 0.00 0.00 581796.54 51622.20 590428.95 00:15:48.896 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:48.896 nvme1n1 : 6.29 123.80 7.74 0.00 0.00 803404.35 2873.50 942105.21 00:15:48.896 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x0 length 0x2000 00:15:48.896 nvme2n1 : 5.81 129.40 8.09 0.00 0.00 873126.69 12905.55 2193943.63 00:15:48.896 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x2000 length 0x2000 00:15:48.896 nvme2n1 : 6.61 152.57 9.54 0.00 0.00 621094.13 16736.89 3407065.40 00:15:48.896 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0x0 length 0xa000 00:15:48.896 nvme3n1 : 5.82 153.94 9.62 0.00 0.00 713702.06 11191.53 767880.27 00:15:48.896 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:48.896 Verification LBA range: start 0xa000 length 0xa000 00:15:48.896 nvme3n1 : 7.18 239.47 14.97 0.00 0.00 371908.12 523.03 3432876.50 00:15:48.896 [2024-12-13T18:10:23.273Z] =================================================================================================================== 00:15:48.896 [2024-12-13T18:10:23.273Z] Total : 1667.05 104.19 0.00 0.00 784747.12 523.03 3432876.50 00:15:48.896 00:15:48.896 real 0m7.978s 00:15:48.896 user 0m14.772s 00:15:48.896 sys 0m0.469s 00:15:48.896 18:10:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.896 ************************************ 
00:15:48.896 END TEST bdev_verify_big_io 00:15:48.896 ************************************ 00:15:48.896 18:10:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.896 18:10:23 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:48.896 18:10:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:48.896 18:10:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.896 18:10:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.896 ************************************ 00:15:48.896 START TEST bdev_write_zeroes 00:15:48.896 ************************************ 00:15:48.896 18:10:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:49.155 [2024-12-13 18:10:23.272048] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:49.155 [2024-12-13 18:10:23.272179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85957 ] 00:15:49.155 [2024-12-13 18:10:23.412876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.155 [2024-12-13 18:10:23.437670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.414 Running I/O for 1 seconds... 00:15:50.354 79936.00 IOPS, 312.25 MiB/s 00:15:50.354 Latency(us) 00:15:50.354 [2024-12-13T18:10:24.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.354 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.354 nvme0n1 : 1.02 13067.38 51.04 0.00 0.00 9786.31 5671.38 22483.89 00:15:50.354 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.354 nvme0n2 : 1.02 13052.21 50.99 0.00 0.00 9791.63 5646.18 22786.36 00:15:50.354 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.354 nvme0n3 : 1.02 13037.30 50.93 0.00 0.00 9796.47 5620.97 23189.66 00:15:50.354 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.354 nvme1n1 : 1.03 13768.75 53.78 0.00 0.00 9267.85 4537.11 20669.05 00:15:50.354 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.354 nvme2n1 : 1.02 13146.26 51.35 0.00 0.00 9701.23 5268.09 20568.22 00:15:50.354 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:50.354 nvme3n1 : 1.02 13006.63 50.81 0.00 0.00 9761.28 3226.39 19257.50 00:15:50.354 [2024-12-13T18:10:24.731Z] =================================================================================================================== 00:15:50.354 [2024-12-13T18:10:24.731Z] Total : 79078.53 308.90 0.00 0.00 9679.87 3226.39 23189.66 00:15:50.615 00:15:50.615 real 0m1.726s 00:15:50.615 user 0m1.106s 00:15:50.615 sys 0m0.450s 00:15:50.615 ************************************ 00:15:50.615 END TEST bdev_write_zeroes 00:15:50.615 ************************************ 00:15:50.615 18:10:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:50.615 18:10:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:50.874 18:10:24 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:50.874 18:10:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:50.874 18:10:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.874 18:10:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.874 ************************************ 00:15:50.874 START TEST bdev_json_nonenclosed 00:15:50.874 ************************************ 00:15:50.874 18:10:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:50.874 [2024-12-13 18:10:25.076141] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:50.874 [2024-12-13 18:10:25.076304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85994 ] 00:15:50.874 [2024-12-13 18:10:25.224933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.134 [2024-12-13 18:10:25.263606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.134 [2024-12-13 18:10:25.263738] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:51.134 [2024-12-13 18:10:25.263763] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:51.134 [2024-12-13 18:10:25.263784] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:51.134 00:15:51.134 real 0m0.345s 00:15:51.134 user 0m0.134s 00:15:51.134 sys 0m0.106s 00:15:51.134 18:10:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.134 ************************************ 00:15:51.134 END TEST bdev_json_nonenclosed 00:15:51.134 ************************************ 00:15:51.134 18:10:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:51.134 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:51.134 18:10:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:51.134 18:10:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.134 18:10:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.134 ************************************ 00:15:51.134 START TEST bdev_json_nonarray 00:15:51.134 ************************************ 00:15:51.134 18:10:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:51.134 [2024-12-13 18:10:25.484047] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
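
bdev_json_nonenclosed above and bdev_json_nonarray below are negative tests: bdevperf is pointed at deliberately malformed configuration files and the test passes only when json_config_prepare_ctx rejects them ("not enclosed in {}.", "'subsystems' should be an array.") and the app exits non-zero. The actual contents of nonenclosed.json and nonarray.json are not shown in this log; a sketch of minimal, hypothetical inputs that would provoke the same two errors:

  # top level is a JSON array, not an object -> "not enclosed in {}"
  cat > nonenclosed.json <<'EOF'
  [ { "subsystems": [] } ]
  EOF
  # "subsystems" is an object where an array is required
  cat > nonarray.json <<'EOF'
  { "subsystems": { "bdev": { "config": [] } } }
  EOF
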
00:15:51.134 [2024-12-13 18:10:25.484187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86019 ] 00:15:51.395 [2024-12-13 18:10:25.631268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.395 [2024-12-13 18:10:25.667623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.395 [2024-12-13 18:10:25.667759] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:51.395 [2024-12-13 18:10:25.667777] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:51.395 [2024-12-13 18:10:25.667795] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:51.395 00:15:51.395 real 0m0.336s 00:15:51.395 user 0m0.127s 00:15:51.395 sys 0m0.104s 00:15:51.395 18:10:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.395 18:10:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:51.395 ************************************ 00:15:51.395 END TEST bdev_json_nonarray 00:15:51.395 ************************************ 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:51.656 18:10:25 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:52.229 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:56.436 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:57.008 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:57.008 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:57.008 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:57.008 00:15:57.008 real 0m49.487s 00:15:57.008 user 1m14.835s 00:15:57.008 sys 0m38.805s 00:15:57.008 ************************************ 00:15:57.008 END TEST blockdev_xnvme 00:15:57.008 ************************************ 00:15:57.008 18:10:31 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.008 18:10:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.008 18:10:31 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:57.008 18:10:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:57.008 18:10:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.008 18:10:31 -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.008 ************************************ 00:15:57.008 START TEST ublk 00:15:57.008 ************************************ 00:15:57.008 18:10:31 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:57.008 * Looking for test storage... 00:15:57.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:57.008 18:10:31 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:57.008 18:10:31 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:57.008 18:10:31 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:15:57.270 18:10:31 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:57.270 18:10:31 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.270 18:10:31 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.270 18:10:31 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.270 18:10:31 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.270 18:10:31 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.270 18:10:31 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.270 18:10:31 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.270 18:10:31 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.270 18:10:31 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.270 18:10:31 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.270 18:10:31 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.270 18:10:31 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:57.270 18:10:31 ublk -- scripts/common.sh@345 -- # : 1 00:15:57.270 18:10:31 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.270 18:10:31 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.270 18:10:31 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:57.270 18:10:31 ublk -- scripts/common.sh@353 -- # local d=1 00:15:57.270 18:10:31 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.270 18:10:31 ublk -- scripts/common.sh@355 -- # echo 1 00:15:57.270 18:10:31 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.270 18:10:31 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:57.270 18:10:31 ublk -- scripts/common.sh@353 -- # local d=2 00:15:57.270 18:10:31 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.270 18:10:31 ublk -- scripts/common.sh@355 -- # echo 2 00:15:57.270 18:10:31 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.270 18:10:31 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.270 18:10:31 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.270 18:10:31 ublk -- scripts/common.sh@368 -- # return 0 00:15:57.270 18:10:31 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.270 18:10:31 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:57.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.270 --rc genhtml_branch_coverage=1 00:15:57.270 --rc genhtml_function_coverage=1 00:15:57.270 --rc genhtml_legend=1 00:15:57.270 --rc geninfo_all_blocks=1 00:15:57.270 --rc geninfo_unexecuted_blocks=1 00:15:57.270 00:15:57.270 ' 00:15:57.270 18:10:31 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:57.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.270 --rc genhtml_branch_coverage=1 00:15:57.270 --rc genhtml_function_coverage=1 00:15:57.270 --rc genhtml_legend=1 00:15:57.270 --rc geninfo_all_blocks=1 00:15:57.270 --rc geninfo_unexecuted_blocks=1 00:15:57.270 00:15:57.270 ' 00:15:57.270 18:10:31 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:57.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.270 --rc genhtml_branch_coverage=1 00:15:57.270 --rc genhtml_function_coverage=1 00:15:57.270 --rc genhtml_legend=1 00:15:57.270 --rc geninfo_all_blocks=1 00:15:57.270 --rc geninfo_unexecuted_blocks=1 00:15:57.270 00:15:57.270 ' 00:15:57.270 18:10:31 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:57.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.270 --rc genhtml_branch_coverage=1 00:15:57.270 --rc genhtml_function_coverage=1 00:15:57.270 --rc genhtml_legend=1 00:15:57.270 --rc geninfo_all_blocks=1 00:15:57.270 --rc geninfo_unexecuted_blocks=1 00:15:57.270 00:15:57.270 ' 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:57.270 18:10:31 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:57.270 18:10:31 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:57.270 18:10:31 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:57.270 18:10:31 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:57.270 18:10:31 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:57.270 18:10:31 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:57.270 18:10:31 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:57.270 18:10:31 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:57.270 18:10:31 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:57.270 18:10:31 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:57.270 18:10:31 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:57.270 18:10:31 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.270 18:10:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:57.270 ************************************ 00:15:57.270 START TEST test_save_ublk_config 00:15:57.270 ************************************ 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86314 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86314 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86314 ']' 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.270 18:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:57.270 [2024-12-13 18:10:31.563908] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
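
test_save_ublk_config starts a plain spdk_tgt with ublk debug logging, creates a ublk target pinned to core 0, exposes a 32 MiB malloc bdev to the kernel as /dev/ublkb0 (1 queue, depth 128), and then captures the resulting runtime state with save_config; the large JSON dump below is that capture. A sketch of the equivalent RPC sequence (method and parameter names are taken from the dump itself; the rpc.py option spellings are assumptions and may vary between SPDK versions):

  ./build/bin/spdk_tgt -L ublk &                          # target with ublk traces
  ./scripts/rpc.py ublk_create_target -c 1                # cpumask "1" = core 0
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096  # 8192 blocks x 4 KiB
  ./scripts/rpc.py ublk_start_disk malloc0 0              # ublk id 0 -> /dev/ublkb0
  ./scripts/rpc.py save_config                            # emits the JSON below
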
00:15:57.271 [2024-12-13 18:10:31.564625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86314 ] 00:15:57.532 [2024-12-13 18:10:31.712408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.532 [2024-12-13 18:10:31.753124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.104 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.104 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:58.104 18:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:58.104 18:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:58.104 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.104 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:58.104 [2024-12-13 18:10:32.426273] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:58.104 [2024-12-13 18:10:32.427417] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:58.104 malloc0 00:15:58.104 [2024-12-13 18:10:32.466400] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:58.104 [2024-12-13 18:10:32.466490] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:58.104 [2024-12-13 18:10:32.466500] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:58.104 [2024-12-13 18:10:32.466514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:58.104 [2024-12-13 18:10:32.475423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:58.104 [2024-12-13 18:10:32.475469] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:58.366 [2024-12-13 18:10:32.481273] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:58.366 [2024-12-13 18:10:32.481416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:58.366 [2024-12-13 18:10:32.499274] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:58.366 0 00:15:58.366 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.366 18:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:58.366 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.366 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:58.627 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.627 18:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:58.627 "subsystems": [ 00:15:58.627 { 00:15:58.627 "subsystem": "fsdev", 00:15:58.627 "config": [ 00:15:58.627 { 00:15:58.627 "method": "fsdev_set_opts", 00:15:58.627 "params": { 00:15:58.627 "fsdev_io_pool_size": 65535, 00:15:58.627 "fsdev_io_cache_size": 256 00:15:58.627 } 00:15:58.627 } 00:15:58.627 ] 00:15:58.627 }, 00:15:58.627 { 00:15:58.627 "subsystem": "keyring", 00:15:58.627 "config": [] 00:15:58.627 }, 00:15:58.627 { 00:15:58.627 "subsystem": "iobuf", 00:15:58.627 "config": [ 00:15:58.627 { 
00:15:58.627 "method": "iobuf_set_options", 00:15:58.627 "params": { 00:15:58.627 "small_pool_count": 8192, 00:15:58.627 "large_pool_count": 1024, 00:15:58.627 "small_bufsize": 8192, 00:15:58.627 "large_bufsize": 135168, 00:15:58.627 "enable_numa": false 00:15:58.627 } 00:15:58.627 } 00:15:58.627 ] 00:15:58.627 }, 00:15:58.627 { 00:15:58.627 "subsystem": "sock", 00:15:58.627 "config": [ 00:15:58.627 { 00:15:58.627 "method": "sock_set_default_impl", 00:15:58.627 "params": { 00:15:58.627 "impl_name": "posix" 00:15:58.627 } 00:15:58.627 }, 00:15:58.627 { 00:15:58.627 "method": "sock_impl_set_options", 00:15:58.627 "params": { 00:15:58.627 "impl_name": "ssl", 00:15:58.627 "recv_buf_size": 4096, 00:15:58.627 "send_buf_size": 4096, 00:15:58.627 "enable_recv_pipe": true, 00:15:58.627 "enable_quickack": false, 00:15:58.627 "enable_placement_id": 0, 00:15:58.627 "enable_zerocopy_send_server": true, 00:15:58.627 "enable_zerocopy_send_client": false, 00:15:58.627 "zerocopy_threshold": 0, 00:15:58.627 "tls_version": 0, 00:15:58.627 "enable_ktls": false 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "sock_impl_set_options", 00:15:58.628 "params": { 00:15:58.628 "impl_name": "posix", 00:15:58.628 "recv_buf_size": 2097152, 00:15:58.628 "send_buf_size": 2097152, 00:15:58.628 "enable_recv_pipe": true, 00:15:58.628 "enable_quickack": false, 00:15:58.628 "enable_placement_id": 0, 00:15:58.628 "enable_zerocopy_send_server": true, 00:15:58.628 "enable_zerocopy_send_client": false, 00:15:58.628 "zerocopy_threshold": 0, 00:15:58.628 "tls_version": 0, 00:15:58.628 "enable_ktls": false 00:15:58.628 } 00:15:58.628 } 00:15:58.628 ] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "vmd", 00:15:58.628 "config": [] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "accel", 00:15:58.628 "config": [ 00:15:58.628 { 00:15:58.628 "method": "accel_set_options", 00:15:58.628 "params": { 00:15:58.628 "small_cache_size": 128, 00:15:58.628 "large_cache_size": 16, 00:15:58.628 "task_count": 2048, 00:15:58.628 "sequence_count": 2048, 00:15:58.628 "buf_count": 2048 00:15:58.628 } 00:15:58.628 } 00:15:58.628 ] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "bdev", 00:15:58.628 "config": [ 00:15:58.628 { 00:15:58.628 "method": "bdev_set_options", 00:15:58.628 "params": { 00:15:58.628 "bdev_io_pool_size": 65535, 00:15:58.628 "bdev_io_cache_size": 256, 00:15:58.628 "bdev_auto_examine": true, 00:15:58.628 "iobuf_small_cache_size": 128, 00:15:58.628 "iobuf_large_cache_size": 16 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "bdev_raid_set_options", 00:15:58.628 "params": { 00:15:58.628 "process_window_size_kb": 1024, 00:15:58.628 "process_max_bandwidth_mb_sec": 0 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "bdev_iscsi_set_options", 00:15:58.628 "params": { 00:15:58.628 "timeout_sec": 30 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "bdev_nvme_set_options", 00:15:58.628 "params": { 00:15:58.628 "action_on_timeout": "none", 00:15:58.628 "timeout_us": 0, 00:15:58.628 "timeout_admin_us": 0, 00:15:58.628 "keep_alive_timeout_ms": 10000, 00:15:58.628 "arbitration_burst": 0, 00:15:58.628 "low_priority_weight": 0, 00:15:58.628 "medium_priority_weight": 0, 00:15:58.628 "high_priority_weight": 0, 00:15:58.628 "nvme_adminq_poll_period_us": 10000, 00:15:58.628 "nvme_ioq_poll_period_us": 0, 00:15:58.628 "io_queue_requests": 0, 00:15:58.628 "delay_cmd_submit": true, 00:15:58.628 "transport_retry_count": 4, 00:15:58.628 
"bdev_retry_count": 3, 00:15:58.628 "transport_ack_timeout": 0, 00:15:58.628 "ctrlr_loss_timeout_sec": 0, 00:15:58.628 "reconnect_delay_sec": 0, 00:15:58.628 "fast_io_fail_timeout_sec": 0, 00:15:58.628 "disable_auto_failback": false, 00:15:58.628 "generate_uuids": false, 00:15:58.628 "transport_tos": 0, 00:15:58.628 "nvme_error_stat": false, 00:15:58.628 "rdma_srq_size": 0, 00:15:58.628 "io_path_stat": false, 00:15:58.628 "allow_accel_sequence": false, 00:15:58.628 "rdma_max_cq_size": 0, 00:15:58.628 "rdma_cm_event_timeout_ms": 0, 00:15:58.628 "dhchap_digests": [ 00:15:58.628 "sha256", 00:15:58.628 "sha384", 00:15:58.628 "sha512" 00:15:58.628 ], 00:15:58.628 "dhchap_dhgroups": [ 00:15:58.628 "null", 00:15:58.628 "ffdhe2048", 00:15:58.628 "ffdhe3072", 00:15:58.628 "ffdhe4096", 00:15:58.628 "ffdhe6144", 00:15:58.628 "ffdhe8192" 00:15:58.628 ], 00:15:58.628 "rdma_umr_per_io": false 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "bdev_nvme_set_hotplug", 00:15:58.628 "params": { 00:15:58.628 "period_us": 100000, 00:15:58.628 "enable": false 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "bdev_malloc_create", 00:15:58.628 "params": { 00:15:58.628 "name": "malloc0", 00:15:58.628 "num_blocks": 8192, 00:15:58.628 "block_size": 4096, 00:15:58.628 "physical_block_size": 4096, 00:15:58.628 "uuid": "27319c5c-bc9a-46da-9e3a-f0acce63f7f7", 00:15:58.628 "optimal_io_boundary": 0, 00:15:58.628 "md_size": 0, 00:15:58.628 "dif_type": 0, 00:15:58.628 "dif_is_head_of_md": false, 00:15:58.628 "dif_pi_format": 0 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "bdev_wait_for_examine" 00:15:58.628 } 00:15:58.628 ] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "scsi", 00:15:58.628 "config": null 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "scheduler", 00:15:58.628 "config": [ 00:15:58.628 { 00:15:58.628 "method": "framework_set_scheduler", 00:15:58.628 "params": { 00:15:58.628 "name": "static" 00:15:58.628 } 00:15:58.628 } 00:15:58.628 ] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "vhost_scsi", 00:15:58.628 "config": [] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "vhost_blk", 00:15:58.628 "config": [] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "ublk", 00:15:58.628 "config": [ 00:15:58.628 { 00:15:58.628 "method": "ublk_create_target", 00:15:58.628 "params": { 00:15:58.628 "cpumask": "1" 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "ublk_start_disk", 00:15:58.628 "params": { 00:15:58.628 "bdev_name": "malloc0", 00:15:58.628 "ublk_id": 0, 00:15:58.628 "num_queues": 1, 00:15:58.628 "queue_depth": 128 00:15:58.628 } 00:15:58.628 } 00:15:58.628 ] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "nbd", 00:15:58.628 "config": [] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "nvmf", 00:15:58.628 "config": [ 00:15:58.628 { 00:15:58.628 "method": "nvmf_set_config", 00:15:58.628 "params": { 00:15:58.628 "discovery_filter": "match_any", 00:15:58.628 "admin_cmd_passthru": { 00:15:58.628 "identify_ctrlr": false 00:15:58.628 }, 00:15:58.628 "dhchap_digests": [ 00:15:58.628 "sha256", 00:15:58.628 "sha384", 00:15:58.628 "sha512" 00:15:58.628 ], 00:15:58.628 "dhchap_dhgroups": [ 00:15:58.628 "null", 00:15:58.628 "ffdhe2048", 00:15:58.628 "ffdhe3072", 00:15:58.628 "ffdhe4096", 00:15:58.628 "ffdhe6144", 00:15:58.628 "ffdhe8192" 00:15:58.628 ] 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "nvmf_set_max_subsystems", 00:15:58.628 "params": { 
00:15:58.628 "max_subsystems": 1024 00:15:58.628 } 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "method": "nvmf_set_crdt", 00:15:58.628 "params": { 00:15:58.628 "crdt1": 0, 00:15:58.628 "crdt2": 0, 00:15:58.628 "crdt3": 0 00:15:58.628 } 00:15:58.628 } 00:15:58.628 ] 00:15:58.628 }, 00:15:58.628 { 00:15:58.628 "subsystem": "iscsi", 00:15:58.628 "config": [ 00:15:58.628 { 00:15:58.628 "method": "iscsi_set_options", 00:15:58.628 "params": { 00:15:58.628 "node_base": "iqn.2016-06.io.spdk", 00:15:58.628 "max_sessions": 128, 00:15:58.628 "max_connections_per_session": 2, 00:15:58.628 "max_queue_depth": 64, 00:15:58.628 "default_time2wait": 2, 00:15:58.628 "default_time2retain": 20, 00:15:58.628 "first_burst_length": 8192, 00:15:58.628 "immediate_data": true, 00:15:58.628 "allow_duplicated_isid": false, 00:15:58.628 "error_recovery_level": 0, 00:15:58.628 "nop_timeout": 60, 00:15:58.628 "nop_in_interval": 30, 00:15:58.628 "disable_chap": false, 00:15:58.628 "require_chap": false, 00:15:58.628 "mutual_chap": false, 00:15:58.628 "chap_group": 0, 00:15:58.628 "max_large_datain_per_connection": 64, 00:15:58.628 "max_r2t_per_connection": 4, 00:15:58.628 "pdu_pool_size": 36864, 00:15:58.628 "immediate_data_pool_size": 16384, 00:15:58.628 "data_out_pool_size": 2048 00:15:58.628 } 00:15:58.628 } 00:15:58.628 ] 00:15:58.628 } 00:15:58.628 ] 00:15:58.628 }' 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86314 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86314 ']' 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86314 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86314 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.628 killing process with pid 86314 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86314' 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86314 00:15:58.628 18:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86314 00:15:58.890 [2024-12-13 18:10:33.220026] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:58.890 [2024-12-13 18:10:33.251398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:58.890 [2024-12-13 18:10:33.251564] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:58.890 [2024-12-13 18:10:33.258286] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:58.890 [2024-12-13 18:10:33.258361] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:58.890 [2024-12-13 18:10:33.258372] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:58.890 [2024-12-13 18:10:33.258411] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:58.890 [2024-12-13 18:10:33.258580] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:59.834 18:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86352 00:15:59.834 18:10:33 
ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86352 00:15:59.834 18:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86352 ']' 00:15:59.834 18:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.834 18:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.834 18:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.834 18:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.834 18:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:59.834 18:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:59.834 18:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:59.834 "subsystems": [ 00:15:59.834 { 00:15:59.834 "subsystem": "fsdev", 00:15:59.834 "config": [ 00:15:59.834 { 00:15:59.834 "method": "fsdev_set_opts", 00:15:59.834 "params": { 00:15:59.834 "fsdev_io_pool_size": 65535, 00:15:59.834 "fsdev_io_cache_size": 256 00:15:59.834 } 00:15:59.834 } 00:15:59.834 ] 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "subsystem": "keyring", 00:15:59.834 "config": [] 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "subsystem": "iobuf", 00:15:59.834 "config": [ 00:15:59.834 { 00:15:59.834 "method": "iobuf_set_options", 00:15:59.834 "params": { 00:15:59.834 "small_pool_count": 8192, 00:15:59.834 "large_pool_count": 1024, 00:15:59.834 "small_bufsize": 8192, 00:15:59.834 "large_bufsize": 135168, 00:15:59.834 "enable_numa": false 00:15:59.834 } 00:15:59.834 } 00:15:59.834 ] 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "subsystem": "sock", 00:15:59.834 "config": [ 00:15:59.834 { 00:15:59.834 "method": "sock_set_default_impl", 00:15:59.834 "params": { 00:15:59.834 "impl_name": "posix" 00:15:59.834 } 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "method": "sock_impl_set_options", 00:15:59.834 "params": { 00:15:59.834 "impl_name": "ssl", 00:15:59.834 "recv_buf_size": 4096, 00:15:59.834 "send_buf_size": 4096, 00:15:59.834 "enable_recv_pipe": true, 00:15:59.834 "enable_quickack": false, 00:15:59.834 "enable_placement_id": 0, 00:15:59.834 "enable_zerocopy_send_server": true, 00:15:59.834 "enable_zerocopy_send_client": false, 00:15:59.834 "zerocopy_threshold": 0, 00:15:59.834 "tls_version": 0, 00:15:59.834 "enable_ktls": false 00:15:59.834 } 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "method": "sock_impl_set_options", 00:15:59.834 "params": { 00:15:59.834 "impl_name": "posix", 00:15:59.834 "recv_buf_size": 2097152, 00:15:59.834 "send_buf_size": 2097152, 00:15:59.834 "enable_recv_pipe": true, 00:15:59.834 "enable_quickack": false, 00:15:59.834 "enable_placement_id": 0, 00:15:59.834 "enable_zerocopy_send_server": true, 00:15:59.834 "enable_zerocopy_send_client": false, 00:15:59.834 "zerocopy_threshold": 0, 00:15:59.834 "tls_version": 0, 00:15:59.834 "enable_ktls": false 00:15:59.834 } 00:15:59.834 } 00:15:59.834 ] 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "subsystem": "vmd", 00:15:59.834 "config": [] 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "subsystem": "accel", 00:15:59.834 "config": [ 00:15:59.834 { 00:15:59.834 "method": "accel_set_options", 00:15:59.834 "params": { 00:15:59.834 
"small_cache_size": 128, 00:15:59.834 "large_cache_size": 16, 00:15:59.834 "task_count": 2048, 00:15:59.834 "sequence_count": 2048, 00:15:59.834 "buf_count": 2048 00:15:59.834 } 00:15:59.834 } 00:15:59.834 ] 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "subsystem": "bdev", 00:15:59.834 "config": [ 00:15:59.834 { 00:15:59.834 "method": "bdev_set_options", 00:15:59.834 "params": { 00:15:59.834 "bdev_io_pool_size": 65535, 00:15:59.834 "bdev_io_cache_size": 256, 00:15:59.834 "bdev_auto_examine": true, 00:15:59.834 "iobuf_small_cache_size": 128, 00:15:59.834 "iobuf_large_cache_size": 16 00:15:59.834 } 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "method": "bdev_raid_set_options", 00:15:59.834 "params": { 00:15:59.834 "process_window_size_kb": 1024, 00:15:59.834 "process_max_bandwidth_mb_sec": 0 00:15:59.834 } 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "method": "bdev_iscsi_set_options", 00:15:59.834 "params": { 00:15:59.834 "timeout_sec": 30 00:15:59.834 } 00:15:59.834 }, 00:15:59.834 { 00:15:59.834 "method": "bdev_nvme_set_options", 00:15:59.834 "params": { 00:15:59.834 "action_on_timeout": "none", 00:15:59.834 "timeout_us": 0, 00:15:59.834 "timeout_admin_us": 0, 00:15:59.834 "keep_alive_timeout_ms": 10000, 00:15:59.834 "arbitration_burst": 0, 00:15:59.834 "low_priority_weight": 0, 00:15:59.834 "medium_priority_weight": 0, 00:15:59.834 "high_priority_weight": 0, 00:15:59.834 "nvme_adminq_poll_period_us": 10000, 00:15:59.834 "nvme_ioq_poll_period_us": 0, 00:15:59.834 "io_queue_requests": 0, 00:15:59.834 "delay_cmd_submit": true, 00:15:59.834 "transport_retry_count": 4, 00:15:59.834 "bdev_retry_count": 3, 00:15:59.835 "transport_ack_timeout": 0, 00:15:59.835 "ctrlr_loss_timeout_sec": 0, 00:15:59.835 "reconnect_delay_sec": 0, 00:15:59.835 "fast_io_fail_timeout_sec": 0, 00:15:59.835 "disable_auto_failback": false, 00:15:59.835 "generate_uuids": false, 00:15:59.835 "transport_tos": 0, 00:15:59.835 "nvme_error_stat": false, 00:15:59.835 "rdma_srq_size": 0, 00:15:59.835 "io_path_stat": false, 00:15:59.835 "allow_accel_sequence": false, 00:15:59.835 "rdma_max_cq_size": 0, 00:15:59.835 "rdma_cm_event_timeout_ms": 0, 00:15:59.835 "dhchap_digests": [ 00:15:59.835 "sha256", 00:15:59.835 "sha384", 00:15:59.835 "sha512" 00:15:59.835 ], 00:15:59.835 "dhchap_dhgroups": [ 00:15:59.835 "null", 00:15:59.835 "ffdhe2048", 00:15:59.835 "ffdhe3072", 00:15:59.835 "ffdhe4096", 00:15:59.835 "ffdhe6144", 00:15:59.835 "ffdhe8192" 00:15:59.835 ], 00:15:59.835 "rdma_umr_per_io": false 00:15:59.835 } 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "method": "bdev_nvme_set_hotplug", 00:15:59.835 "params": { 00:15:59.835 "period_us": 100000, 00:15:59.835 "enable": false 00:15:59.835 } 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "method": "bdev_malloc_create", 00:15:59.835 "params": { 00:15:59.835 "name": "malloc0", 00:15:59.835 "num_blocks": 8192, 00:15:59.835 "block_size": 4096, 00:15:59.835 "physical_block_size": 4096, 00:15:59.835 "uuid": "27319c5c-bc9a-46da-9e3a-f0acce63f7f7", 00:15:59.835 "optimal_io_boundary": 0, 00:15:59.835 "md_size": 0, 00:15:59.835 "dif_type": 0, 00:15:59.835 "dif_is_head_of_md": false, 00:15:59.835 "dif_pi_format": 0 00:15:59.835 } 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "method": "bdev_wait_for_examine" 00:15:59.835 } 00:15:59.835 ] 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "subsystem": "scsi", 00:15:59.835 "config": null 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "subsystem": "scheduler", 00:15:59.835 "config": [ 00:15:59.835 { 00:15:59.835 "method": "framework_set_scheduler", 00:15:59.835 
"params": { 00:15:59.835 "name": "static" 00:15:59.835 } 00:15:59.835 } 00:15:59.835 ] 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "subsystem": "vhost_scsi", 00:15:59.835 "config": [] 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "subsystem": "vhost_blk", 00:15:59.835 "config": [] 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "subsystem": "ublk", 00:15:59.835 "config": [ 00:15:59.835 { 00:15:59.835 "method": "ublk_create_target", 00:15:59.835 "params": { 00:15:59.835 "cpumask": "1" 00:15:59.835 } 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "method": "ublk_start_disk", 00:15:59.835 "params": { 00:15:59.835 "bdev_name": "malloc0", 00:15:59.835 "ublk_id": 0, 00:15:59.835 "num_queues": 1, 00:15:59.835 "queue_depth": 128 00:15:59.835 } 00:15:59.835 } 00:15:59.835 ] 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "subsystem": "nbd", 00:15:59.835 "config": [] 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "subsystem": "nvmf", 00:15:59.835 "config": [ 00:15:59.835 { 00:15:59.835 "method": "nvmf_set_config", 00:15:59.835 "params": { 00:15:59.835 "discovery_filter": "match_any", 00:15:59.835 "admin_cmd_passthru": { 00:15:59.835 "identify_ctrlr": false 00:15:59.835 }, 00:15:59.835 "dhchap_digests": [ 00:15:59.835 "sha256", 00:15:59.835 "sha384", 00:15:59.835 "sha512" 00:15:59.835 ], 00:15:59.835 "dhchap_dhgroups": [ 00:15:59.835 "null", 00:15:59.835 "ffdhe2048", 00:15:59.835 "ffdhe3072", 00:15:59.835 "ffdhe4096", 00:15:59.835 "ffdhe6144", 00:15:59.835 "ffdhe8192" 00:15:59.835 ] 00:15:59.835 } 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "method": "nvmf_set_max_subsystems", 00:15:59.835 "params": { 00:15:59.835 "max_subsystems": 1024 00:15:59.835 } 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "method": "nvmf_set_crdt", 00:15:59.835 "params": { 00:15:59.835 "crdt1": 0, 00:15:59.835 "crdt2": 0, 00:15:59.835 "crdt3": 0 00:15:59.835 } 00:15:59.835 } 00:15:59.835 ] 00:15:59.835 }, 00:15:59.835 { 00:15:59.835 "subsystem": "iscsi", 00:15:59.835 "config": [ 00:15:59.835 { 00:15:59.835 "method": "iscsi_set_options", 00:15:59.835 "params": { 00:15:59.835 "node_base": "iqn.2016-06.io.spdk", 00:15:59.835 "max_sessions": 128, 00:15:59.835 "max_connections_per_session": 2, 00:15:59.835 "max_queue_depth": 64, 00:15:59.835 "default_time2wait": 2, 00:15:59.835 "default_time2retain": 20, 00:15:59.835 "first_burst_length": 8192, 00:15:59.835 "immediate_data": true, 00:15:59.835 "allow_duplicated_isid": false, 00:15:59.835 "error_recovery_level": 0, 00:15:59.835 "nop_timeout": 60, 00:15:59.835 "nop_in_interval": 30, 00:15:59.835 "disable_chap": false, 00:15:59.835 "require_chap": false, 00:15:59.835 "mutual_chap": false, 00:15:59.835 "chap_group": 0, 00:15:59.835 "max_large_datain_per_connection": 64, 00:15:59.835 "max_r2t_per_connection": 4, 00:15:59.835 "pdu_pool_size": 36864, 00:15:59.835 "immediate_data_pool_size": 16384, 00:15:59.835 "data_out_pool_size": 2048 00:15:59.835 } 00:15:59.835 } 00:15:59.835 ] 00:15:59.835 } 00:15:59.835 ] 00:15:59.835 }' 00:15:59.835 [2024-12-13 18:10:33.944596] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:15:59.835 [2024-12-13 18:10:33.945127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86352 ] 00:15:59.835 [2024-12-13 18:10:34.091906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.835 [2024-12-13 18:10:34.128368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.406 [2024-12-13 18:10:34.604279] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:00.406 [2024-12-13 18:10:34.604691] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:00.406 [2024-12-13 18:10:34.612412] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:00.406 [2024-12-13 18:10:34.612504] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:00.406 [2024-12-13 18:10:34.612513] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:00.406 [2024-12-13 18:10:34.612525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:00.406 [2024-12-13 18:10:34.621395] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:00.407 [2024-12-13 18:10:34.621429] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:00.407 [2024-12-13 18:10:34.628292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:00.407 [2024-12-13 18:10:34.628421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:00.407 [2024-12-13 18:10:34.645278] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86352 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86352 ']' 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86352 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86352 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.667 killing process with pid 86352 00:16:00.667 
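The checks above (ublk_get_disks piped through jq, then the -b test) confirm that the reloaded configuration recreated /dev/ublkb0 without any explicit ublk RPCs being issued to the new process. A sketch of the same verification outside the harness, assuming jq is installed and the default RPC socket is in use:

    # the restored config should have re-created the ublk device
    scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expected: /dev/ublkb0
    test -b /dev/ublkb0 && echo "ublkb0 is a block device"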
18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86352' 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86352 00:16:00.667 18:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86352 00:16:00.931 [2024-12-13 18:10:35.264144] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:00.931 [2024-12-13 18:10:35.300409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:00.931 [2024-12-13 18:10:35.300564] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:01.192 [2024-12-13 18:10:35.308289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:01.192 [2024-12-13 18:10:35.308355] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:01.192 [2024-12-13 18:10:35.308379] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:01.192 [2024-12-13 18:10:35.308412] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:01.192 [2024-12-13 18:10:35.308579] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:01.801 18:10:35 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:01.801 00:16:01.801 real 0m4.412s 00:16:01.801 user 0m2.796s 00:16:01.801 sys 0m2.287s 00:16:01.801 18:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.801 ************************************ 00:16:01.801 END TEST test_save_ublk_config 00:16:01.801 ************************************ 00:16:01.801 18:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:01.801 18:10:35 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86408 00:16:01.801 18:10:35 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:01.802 18:10:35 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86408 00:16:01.802 18:10:35 ublk -- common/autotest_common.sh@835 -- # '[' -z 86408 ']' 00:16:01.802 18:10:35 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.802 18:10:35 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:01.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.802 18:10:35 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.802 18:10:35 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.802 18:10:35 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.802 18:10:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:01.802 [2024-12-13 18:10:36.015686] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
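The target for the create tests is launched with -m 0x3, which pins one reactor to core 0 and one to core 1 (visible in the two reactor_run notices below). A sketch of the launch-and-wait pattern that waitforlisten wraps, with the caveat that the polling loop here is an illustrative substitute using spdk_get_version as a cheap liveness probe, not the harness's own retry logic:

    build/bin/spdk_tgt -m 0x3 -L ublk &
    # poll the RPC socket until the target accepts commands
    until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done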
00:16:01.802 [2024-12-13 18:10:36.015800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86408 ] 00:16:01.802 [2024-12-13 18:10:36.155090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:02.102 [2024-12-13 18:10:36.178930] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.102 [2024-12-13 18:10:36.179041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.668 18:10:36 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.668 18:10:36 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:02.668 18:10:36 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:02.668 18:10:36 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:02.668 18:10:36 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.668 18:10:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:02.668 ************************************ 00:16:02.668 START TEST test_create_ublk 00:16:02.668 ************************************ 00:16:02.668 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:02.668 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:02.668 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.668 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:02.668 [2024-12-13 18:10:36.827262] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:02.668 [2024-12-13 18:10:36.828536] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:02.668 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.668 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:02.668 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:02.668 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.668 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:02.669 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.669 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:02.669 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:02.669 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.669 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:02.669 [2024-12-13 18:10:36.905365] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:02.669 [2024-12-13 18:10:36.905703] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:02.669 [2024-12-13 18:10:36.905715] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:02.669 [2024-12-13 18:10:36.905729] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:02.669 [2024-12-13 18:10:36.913284] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:02.669 [2024-12-13 18:10:36.913311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:02.669 
[2024-12-13 18:10:36.921272] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:02.669 [2024-12-13 18:10:36.921779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:02.669 [2024-12-13 18:10:36.952266] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:02.669 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.669 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:02.669 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:02.669 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:02.669 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.669 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:02.669 18:10:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.669 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:02.669 { 00:16:02.669 "ublk_device": "/dev/ublkb0", 00:16:02.669 "id": 0, 00:16:02.669 "queue_depth": 512, 00:16:02.669 "num_queues": 4, 00:16:02.669 "bdev_name": "Malloc0" 00:16:02.669 } 00:16:02.669 ]' 00:16:02.669 18:10:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:02.669 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:02.669 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:02.669 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:02.669 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:02.927 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:02.927 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:02.927 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:02.927 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:02.927 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:02.927 18:10:37 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
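run_fio_test assembles the command shown in fio_template above: a 10-second, time-based, direct-I/O pattern write covering the full 128 MiB ublk device. Because --time_based lets the write phase consume the entire runtime, fio warns below that the verification read phase never starts; the verify options still cause every written block to be stamped with the 0xcc pattern. The equivalent standalone invocation, copied from the template:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0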
00:16:02.927 18:10:37 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:02.927 fio: verification read phase will never start because write phase uses all of runtime 00:16:02.927 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:02.927 fio-3.35 00:16:02.927 Starting 1 process 00:16:15.125 00:16:15.125 fio_test: (groupid=0, jobs=1): err= 0: pid=86447: Fri Dec 13 18:10:47 2024 00:16:15.125 write: IOPS=16.5k, BW=64.5MiB/s (67.6MB/s)(645MiB/10001msec); 0 zone resets 00:16:15.125 clat (usec): min=36, max=4051, avg=59.81, stdev=82.47 00:16:15.125 lat (usec): min=36, max=4051, avg=60.26, stdev=82.51 00:16:15.125 clat percentiles (usec): 00:16:15.125 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 49], 20.00th=[ 52], 00:16:15.125 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 58], 00:16:15.125 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 67], 95.00th=[ 73], 00:16:15.125 | 99.00th=[ 87], 99.50th=[ 101], 99.90th=[ 1188], 99.95th=[ 2507], 00:16:15.125 | 99.99th=[ 3458] 00:16:15.125 bw ( KiB/s): min=63288, max=68920, per=99.92%, avg=65949.05, stdev=1267.08, samples=19 00:16:15.125 iops : min=15822, max=17230, avg=16487.26, stdev=316.77, samples=19 00:16:15.125 lat (usec) : 50=13.56%, 100=85.93%, 250=0.33%, 500=0.04%, 750=0.01% 00:16:15.125 lat (usec) : 1000=0.02% 00:16:15.125 lat (msec) : 2=0.05%, 4=0.07%, 10=0.01% 00:16:15.125 cpu : usr=2.86%, sys=13.81%, ctx=165021, majf=0, minf=794 00:16:15.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.125 issued rwts: total=0,165025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.125 00:16:15.125 Run status group 0 (all jobs): 00:16:15.125 WRITE: bw=64.5MiB/s (67.6MB/s), 64.5MiB/s-64.5MiB/s (67.6MB/s-67.6MB/s), io=645MiB (676MB), run=10001-10001msec 00:16:15.125 00:16:15.125 Disk stats (read/write): 00:16:15.126 ublkb0: ios=0/163290, merge=0/0, ticks=0/8287, in_queue=8287, util=99.02% 00:16:15.126 18:10:47 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 [2024-12-13 18:10:47.364308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:15.126 [2024-12-13 18:10:47.406306] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:15.126 [2024-12-13 18:10:47.406958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:15.126 [2024-12-13 18:10:47.415297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:15.126 [2024-12-13 18:10:47.415551] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:15.126 [2024-12-13 18:10:47.415568] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 [2024-12-13 18:10:47.430363] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:15.126 request: 00:16:15.126 { 00:16:15.126 "ublk_id": 0, 00:16:15.126 "method": "ublk_stop_disk", 00:16:15.126 "req_id": 1 00:16:15.126 } 00:16:15.126 Got JSON-RPC error response 00:16:15.126 response: 00:16:15.126 { 00:16:15.126 "code": -19, 00:16:15.126 "message": "No such device" 00:16:15.126 } 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:15.126 18:10:47 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 [2024-12-13 18:10:47.439333] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:15.126 [2024-12-13 18:10:47.441200] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:15.126 [2024-12-13 18:10:47.441236] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:15.126 18:10:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:15.126 18:10:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:15.126 18:10:47 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:15.126 18:10:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:15.126 18:10:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:15.126 ************************************ 00:16:15.126 END TEST test_create_ublk 00:16:15.126 ************************************ 00:16:15.126 18:10:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:15.126 00:16:15.126 real 0m10.811s 00:16:15.126 user 0m0.579s 00:16:15.126 sys 0m1.455s 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 18:10:47 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:15.126 18:10:47 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.126 18:10:47 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.126 18:10:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 ************************************ 00:16:15.126 START TEST test_create_multi_ublk 00:16:15.126 ************************************ 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 [2024-12-13 18:10:47.669262] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:15.126 [2024-12-13 18:10:47.670380] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 [2024-12-13 18:10:47.749416] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:15.126 [2024-12-13 18:10:47.749746] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:15.126 [2024-12-13 18:10:47.749759] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:15.126 [2024-12-13 18:10:47.749765] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:15.126 [2024-12-13 18:10:47.761330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:15.126 [2024-12-13 18:10:47.761347] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:15.126 [2024-12-13 18:10:47.773268] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:15.126 [2024-12-13 18:10:47.773794] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:15.126 [2024-12-13 18:10:47.820274] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 [2024-12-13 18:10:47.924356] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:15.126 [2024-12-13 18:10:47.924667] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:15.126 [2024-12-13 18:10:47.924679] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:15.126 [2024-12-13 18:10:47.924685] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:15.126 [2024-12-13 18:10:47.937469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:15.126 [2024-12-13 18:10:47.937489] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:15.126 [2024-12-13 18:10:47.948262] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:15.126 [2024-12-13 18:10:47.948777] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:15.126 [2024-12-13 18:10:47.977270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.126 
18:10:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.126 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.126 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:15.126 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:15.126 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.126 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 [2024-12-13 18:10:48.084353] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:15.127 [2024-12-13 18:10:48.084675] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:15.127 [2024-12-13 18:10:48.084688] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:15.127 [2024-12-13 18:10:48.084694] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:15.127 [2024-12-13 18:10:48.096285] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:15.127 [2024-12-13 18:10:48.096301] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:15.127 [2024-12-13 18:10:48.108275] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:15.127 [2024-12-13 18:10:48.108780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:15.127 [2024-12-13 18:10:48.119262] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 [2024-12-13 18:10:48.215362] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:15.127 [2024-12-13 18:10:48.215679] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:15.127 [2024-12-13 18:10:48.215691] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:15.127 [2024-12-13 18:10:48.215697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:15.127 
[2024-12-13 18:10:48.227270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:15.127 [2024-12-13 18:10:48.227291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:15.127 [2024-12-13 18:10:48.239273] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:15.127 [2024-12-13 18:10:48.239781] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:15.127 [2024-12-13 18:10:48.252305] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:15.127 { 00:16:15.127 "ublk_device": "/dev/ublkb0", 00:16:15.127 "id": 0, 00:16:15.127 "queue_depth": 512, 00:16:15.127 "num_queues": 4, 00:16:15.127 "bdev_name": "Malloc0" 00:16:15.127 }, 00:16:15.127 { 00:16:15.127 "ublk_device": "/dev/ublkb1", 00:16:15.127 "id": 1, 00:16:15.127 "queue_depth": 512, 00:16:15.127 "num_queues": 4, 00:16:15.127 "bdev_name": "Malloc1" 00:16:15.127 }, 00:16:15.127 { 00:16:15.127 "ublk_device": "/dev/ublkb2", 00:16:15.127 "id": 2, 00:16:15.127 "queue_depth": 512, 00:16:15.127 "num_queues": 4, 00:16:15.127 "bdev_name": "Malloc2" 00:16:15.127 }, 00:16:15.127 { 00:16:15.127 "ublk_device": "/dev/ublkb3", 00:16:15.127 "id": 3, 00:16:15.127 "queue_depth": 512, 00:16:15.127 "num_queues": 4, 00:16:15.127 "bdev_name": "Malloc3" 00:16:15.127 } 00:16:15.127 ]' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.127 18:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 [2024-12-13 18:10:48.931343] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:15.127 [2024-12-13 18:10:48.983265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:15.127 [2024-12-13 18:10:48.984162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:15.127 [2024-12-13 18:10:48.991272] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:15.127 [2024-12-13 18:10:48.991508] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:15.127 [2024-12-13 18:10:48.991520] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 [2024-12-13 18:10:49.007358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:15.127 [2024-12-13 18:10:49.046704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:15.127 [2024-12-13 18:10:49.047906] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:15.127 [2024-12-13 18:10:49.054277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:15.127 [2024-12-13 18:10:49.054517] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:15.127 [2024-12-13 18:10:49.054529] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.127 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 [2024-12-13 18:10:49.068348] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:15.127 [2024-12-13 18:10:49.109700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:15.127 [2024-12-13 18:10:49.110789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:15.127 [2024-12-13 18:10:49.117276] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:15.128 [2024-12-13 18:10:49.117503] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:15.128 [2024-12-13 18:10:49.117513] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.128 [2024-12-13 18:10:49.133341] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:15.128 [2024-12-13 18:10:49.171712] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:15.128 [2024-12-13 18:10:49.172707] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:15.128 [2024-12-13 18:10:49.181272] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:15.128 [2024-12-13 18:10:49.181499] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:15.128 [2024-12-13 18:10:49.181509] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:15.128 [2024-12-13 18:10:49.373311] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:15.128 [2024-12-13 18:10:49.374629] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:15.128 [2024-12-13 18:10:49.374658] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.128 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:15.386 18:10:49 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:15.386 18:10:49 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:15.645 ************************************ 00:16:15.645 END TEST test_create_multi_ublk 00:16:15.645 ************************************ 00:16:15.645 18:10:49 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:15.645 00:16:15.645 real 0m2.129s 00:16:15.645 user 0m0.808s 00:16:15.645 sys 0m0.140s 00:16:15.645 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.645 18:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.645 18:10:49 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:15.645 18:10:49 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:15.645 18:10:49 ublk -- ublk/ublk.sh@130 -- # killprocess 86408 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@954 -- # '[' -z 86408 ']' 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@958 -- # kill -0 86408 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@959 -- # uname 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86408 00:16:15.645 killing process with pid 86408 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86408' 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@973 -- # kill 86408 00:16:15.645 18:10:49 ublk -- common/autotest_common.sh@978 -- # wait 86408 00:16:15.903 [2024-12-13 18:10:50.062458] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:15.903 [2024-12-13 18:10:50.062533] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:16.163 00:16:16.163 real 0m19.010s 00:16:16.163 user 0m28.224s 00:16:16.163 sys 0m8.601s 00:16:16.163 18:10:50 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.163 ************************************ 00:16:16.163 END TEST ublk 00:16:16.163 ************************************ 00:16:16.163 18:10:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:16.163 18:10:50 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:16.163 
18:10:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:16.163 18:10:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.163 18:10:50 -- common/autotest_common.sh@10 -- # set +x 00:16:16.163 ************************************ 00:16:16.163 START TEST ublk_recovery 00:16:16.163 ************************************ 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:16.163 * Looking for test storage... 00:16:16.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.163 18:10:50 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:16.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.163 --rc genhtml_branch_coverage=1 00:16:16.163 --rc genhtml_function_coverage=1 00:16:16.163 --rc genhtml_legend=1 00:16:16.163 --rc geninfo_all_blocks=1 00:16:16.163 --rc geninfo_unexecuted_blocks=1 00:16:16.163 00:16:16.163 ' 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:16.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.163 --rc genhtml_branch_coverage=1 00:16:16.163 --rc genhtml_function_coverage=1 00:16:16.163 --rc genhtml_legend=1 00:16:16.163 --rc geninfo_all_blocks=1 00:16:16.163 --rc geninfo_unexecuted_blocks=1 00:16:16.163 00:16:16.163 ' 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:16.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.163 --rc genhtml_branch_coverage=1 00:16:16.163 --rc genhtml_function_coverage=1 00:16:16.163 --rc genhtml_legend=1 00:16:16.163 --rc geninfo_all_blocks=1 00:16:16.163 --rc geninfo_unexecuted_blocks=1 00:16:16.163 00:16:16.163 ' 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:16.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.163 --rc genhtml_branch_coverage=1 00:16:16.163 --rc genhtml_function_coverage=1 00:16:16.163 --rc genhtml_legend=1 00:16:16.163 --rc geninfo_all_blocks=1 00:16:16.163 --rc geninfo_unexecuted_blocks=1 00:16:16.163 00:16:16.163 ' 00:16:16.163 18:10:50 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:16.163 18:10:50 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:16.163 18:10:50 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:16.163 18:10:50 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:16.163 18:10:50 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:16.163 18:10:50 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:16.163 18:10:50 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:16.163 18:10:50 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:16.163 18:10:50 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:16.163 18:10:50 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:16.163 18:10:50 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86771 00:16:16.163 18:10:50 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:16.163 18:10:50 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86771 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86771 ']' 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.163 18:10:50 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.163 18:10:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.422 [2024-12-13 18:10:50.581210] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:16.422 [2024-12-13 18:10:50.581363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86771 ] 00:16:16.422 [2024-12-13 18:10:50.722842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:16.422 [2024-12-13 18:10:50.747353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.422 [2024-12-13 18:10:50.747371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:17.356 18:10:51 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.356 [2024-12-13 18:10:51.414263] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:17.356 [2024-12-13 18:10:51.415480] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.356 18:10:51 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.356 malloc0 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.356 18:10:51 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.356 [2024-12-13 18:10:51.454371] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:17.356 [2024-12-13 18:10:51.454458] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:17.356 [2024-12-13 18:10:51.454471] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:17.356 [2024-12-13 18:10:51.454479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:17.356 [2024-12-13 18:10:51.463355] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:17.356 [2024-12-13 18:10:51.463380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:17.356 [2024-12-13 18:10:51.470268] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:17.356 [2024-12-13 18:10:51.470395] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:17.356 [2024-12-13 18:10:51.493270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:17.356 1 00:16:17.356 18:10:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.356 18:10:51 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:18.290 18:10:52 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=86804 00:16:18.290 18:10:52 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:18.290 18:10:52 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:18.290 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.290 fio-3.35 00:16:18.290 Starting 1 process 00:16:23.559 18:10:57 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86771 00:16:23.559 18:10:57 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:28.851 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86771 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:28.851 18:11:02 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=86916 00:16:28.851 18:11:02 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.851 18:11:02 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 86916 00:16:28.851 18:11:02 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:28.851 18:11:02 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86916 ']' 00:16:28.851 18:11:02 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.851 18:11:02 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.851 18:11:02 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.851 18:11:02 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.851 18:11:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.851 [2024-12-13 18:11:02.583455] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
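The stretch above is the heart of ublk_recovery.sh: a ublk device is built on a malloc bdev, a fio workload is aimed at /dev/ublkb1, the target is hard-killed mid-I/O, and a fresh target is started so the device can be recovered. A condensed sketch of the driving commands, with rpc.py abbreviating the traced /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB bdev, 4 KiB blocks
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # /dev/ublkb1, 2 queues, qd 128
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    kill -9 "$spdk_pid"                            # SIGKILL the target mid-I/O (pid 86771 above)
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &      # restart; becomes pid 86916 below
    rpc.py ublk_recover_disk malloc0 1             # reattach the queues to the new target

fio keeps running across the kill because the ublk kernel driver keeps /dev/ublkb1 alive and parks outstanding I/O until a new daemon reattaches; once UBLK_CMD_END_USER_RECOVERY completes below, I/O resumes on the same device node.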
00:16:28.851 [2024-12-13 18:11:02.583560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86916 ] 00:16:28.851 [2024-12-13 18:11:02.722868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:28.851 [2024-12-13 18:11:02.750034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.851 [2024-12-13 18:11:02.750132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:29.110 18:11:03 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.110 [2024-12-13 18:11:03.383262] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:29.110 [2024-12-13 18:11:03.384502] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.110 18:11:03 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.110 malloc0 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.110 18:11:03 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.110 [2024-12-13 18:11:03.423363] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:29.110 [2024-12-13 18:11:03.423397] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:29.110 [2024-12-13 18:11:03.423404] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:29.110 [2024-12-13 18:11:03.431293] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:29.110 [2024-12-13 18:11:03.431310] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:29.110 [2024-12-13 18:11:03.431320] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:29.110 1 00:16:29.110 [2024-12-13 18:11:03.431379] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:29.110 18:11:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.110 18:11:03 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 86804 00:16:29.110 [2024-12-13 18:11:03.439273] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:29.110 [2024-12-13 18:11:03.446010] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:29.110 [2024-12-13 18:11:03.453461] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:29.110 [2024-12-13 
18:11:03.453481] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:25.338 00:17:25.338 fio_test: (groupid=0, jobs=1): err= 0: pid=86814: Fri Dec 13 18:11:52 2024 00:17:25.338 read: IOPS=25.5k, BW=99.5MiB/s (104MB/s)(5969MiB/60001msec) 00:17:25.338 slat (nsec): min=1130, max=163878, avg=5560.70, stdev=1390.67 00:17:25.338 clat (usec): min=595, max=5954.5k, avg=2474.90, stdev=39409.43 00:17:25.338 lat (usec): min=600, max=5954.5k, avg=2480.46, stdev=39409.43 00:17:25.338 clat percentiles (usec): 00:17:25.338 | 1.00th=[ 1860], 5.00th=[ 1991], 10.00th=[ 2024], 20.00th=[ 2057], 00:17:25.338 | 30.00th=[ 2073], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114], 00:17:25.338 | 70.00th=[ 2147], 80.00th=[ 2147], 90.00th=[ 2212], 95.00th=[ 3097], 00:17:25.338 | 99.00th=[ 4948], 99.50th=[ 5342], 99.90th=[ 6652], 99.95th=[ 7373], 00:17:25.338 | 99.99th=[11863] 00:17:25.338 bw ( KiB/s): min=28856, max=116304, per=100.00%, avg=112235.41, stdev=11129.06, samples=108 00:17:25.338 iops : min= 7214, max=29076, avg=28058.85, stdev=2782.27, samples=108 00:17:25.338 write: IOPS=25.4k, BW=99.4MiB/s (104MB/s)(5962MiB/60001msec); 0 zone resets 00:17:25.338 slat (nsec): min=1504, max=2458.9k, avg=5782.14, stdev=2443.47 00:17:25.338 clat (usec): min=645, max=5954.6k, avg=2541.71, stdev=37627.07 00:17:25.338 lat (usec): min=650, max=5954.6k, avg=2547.50, stdev=37627.06 00:17:25.338 clat percentiles (usec): 00:17:25.338 | 1.00th=[ 1926], 5.00th=[ 2089], 10.00th=[ 2114], 20.00th=[ 2147], 00:17:25.338 | 30.00th=[ 2180], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2212], 00:17:25.338 | 70.00th=[ 2245], 80.00th=[ 2245], 90.00th=[ 2311], 95.00th=[ 3032], 00:17:25.338 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 6718], 99.95th=[ 7701], 00:17:25.338 | 99.99th=[11994] 00:17:25.338 bw ( KiB/s): min=29560, max=116696, per=100.00%, avg=112085.93, stdev=11047.82, samples=108 00:17:25.338 iops : min= 7390, max=29174, avg=28021.48, stdev=2761.96, samples=108 00:17:25.338 lat (usec) : 750=0.01%, 1000=0.01% 00:17:25.338 lat (msec) : 2=3.48%, 4=93.96%, 10=2.55%, 20=0.01%, >=2000=0.01% 00:17:25.338 cpu : usr=5.58%, sys=29.34%, ctx=99882, majf=0, minf=15 00:17:25.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:25.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:25.338 issued rwts: total=1528130,1526286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:25.338 00:17:25.338 Run status group 0 (all jobs): 00:17:25.338 READ: bw=99.5MiB/s (104MB/s), 99.5MiB/s-99.5MiB/s (104MB/s-104MB/s), io=5969MiB (6259MB), run=60001-60001msec 00:17:25.338 WRITE: bw=99.4MiB/s (104MB/s), 99.4MiB/s-99.4MiB/s (104MB/s-104MB/s), io=5962MiB (6252MB), run=60001-60001msec 00:17:25.338 00:17:25.338 Disk stats (read/write): 00:17:25.338 ublkb1: ios=1525104/1523120, merge=0/0, ticks=3692243/3656824, in_queue=7349068, util=99.90% 00:17:25.338 18:11:52 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.338 [2024-12-13 18:11:52.771066] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:25.338 [2024-12-13 18:11:52.815286] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:17:25.338 [2024-12-13 18:11:52.815460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:25.338 [2024-12-13 18:11:52.825290] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:25.338 [2024-12-13 18:11:52.825407] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:25.338 [2024-12-13 18:11:52.825414] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.338 18:11:52 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.338 [2024-12-13 18:11:52.838347] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:25.338 [2024-12-13 18:11:52.839690] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:25.338 [2024-12-13 18:11:52.839723] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.338 18:11:52 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:25.338 18:11:52 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:25.338 18:11:52 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 86916 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 86916 ']' 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 86916 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86916 00:17:25.338 killing process with pid 86916 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86916' 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@973 -- # kill 86916 00:17:25.338 18:11:52 ublk_recovery -- common/autotest_common.sh@978 -- # wait 86916 00:17:25.338 [2024-12-13 18:11:53.101675] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:25.338 [2024-12-13 18:11:53.101739] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:25.338 ************************************ 00:17:25.338 END TEST ublk_recovery 00:17:25.338 ************************************ 00:17:25.338 00:17:25.338 real 1m3.098s 00:17:25.338 user 1m39.855s 00:17:25.338 sys 0m36.605s 00:17:25.338 18:11:53 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.338 18:11:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.338 18:11:53 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:25.338 18:11:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:25.338 18:11:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.338 18:11:53 -- common/autotest_common.sh@10 -- # set +x 00:17:25.338 18:11:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:25.338 18:11:53 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:25.338 18:11:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:25.338 18:11:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.338 18:11:53 -- common/autotest_common.sh@10 -- # set +x 00:17:25.338 ************************************ 00:17:25.338 START TEST ftl 00:17:25.338 ************************************ 00:17:25.338 18:11:53 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:25.338 * Looking for test storage... 00:17:25.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.338 18:11:53 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:25.338 18:11:53 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:25.338 18:11:53 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:25.338 18:11:53 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:25.338 18:11:53 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.338 18:11:53 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.338 18:11:53 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.338 18:11:53 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.338 18:11:53 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.338 18:11:53 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.338 18:11:53 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.338 18:11:53 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.338 18:11:53 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.338 18:11:53 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.338 18:11:53 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.338 18:11:53 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:25.338 18:11:53 ftl -- scripts/common.sh@345 -- # : 1 00:17:25.338 18:11:53 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.339 18:11:53 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.339 18:11:53 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:25.339 18:11:53 ftl -- scripts/common.sh@353 -- # local d=1 00:17:25.339 18:11:53 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.339 18:11:53 ftl -- scripts/common.sh@355 -- # echo 1 00:17:25.339 18:11:53 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.339 18:11:53 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:25.339 18:11:53 ftl -- scripts/common.sh@353 -- # local d=2 00:17:25.339 18:11:53 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.339 18:11:53 ftl -- scripts/common.sh@355 -- # echo 2 00:17:25.339 18:11:53 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.339 18:11:53 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.339 18:11:53 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.339 18:11:53 ftl -- scripts/common.sh@368 -- # return 0 00:17:25.339 18:11:53 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.339 18:11:53 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:25.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.339 --rc genhtml_branch_coverage=1 00:17:25.339 --rc genhtml_function_coverage=1 00:17:25.339 --rc genhtml_legend=1 00:17:25.339 --rc geninfo_all_blocks=1 00:17:25.339 --rc geninfo_unexecuted_blocks=1 00:17:25.339 00:17:25.339 ' 00:17:25.339 18:11:53 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:25.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.339 --rc genhtml_branch_coverage=1 00:17:25.339 --rc genhtml_function_coverage=1 00:17:25.339 --rc genhtml_legend=1 00:17:25.339 --rc geninfo_all_blocks=1 00:17:25.339 --rc geninfo_unexecuted_blocks=1 00:17:25.339 00:17:25.339 ' 00:17:25.339 18:11:53 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:25.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.339 --rc genhtml_branch_coverage=1 00:17:25.339 --rc genhtml_function_coverage=1 00:17:25.339 --rc genhtml_legend=1 00:17:25.339 --rc geninfo_all_blocks=1 00:17:25.339 --rc geninfo_unexecuted_blocks=1 00:17:25.339 00:17:25.339 ' 00:17:25.339 18:11:53 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:25.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.339 --rc genhtml_branch_coverage=1 00:17:25.339 --rc genhtml_function_coverage=1 00:17:25.339 --rc genhtml_legend=1 00:17:25.339 --rc geninfo_all_blocks=1 00:17:25.339 --rc geninfo_unexecuted_blocks=1 00:17:25.339 00:17:25.339 ' 00:17:25.339 18:11:53 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:25.339 18:11:53 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:25.339 18:11:53 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.339 18:11:53 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.339 18:11:53 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
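Every run_test stanza in this log opens with the same lcov guard: scripts/common.sh splits the installed lcov version and the threshold on the characters '.', '-' and ':' and compares them field by field; 'lt 1.15 2' above succeeded, so the pre-2.0 --rc flag spelling gets exported into LCOV_OPTS. A condensed approximation of the traced comparison, reduced to just the less-than case:

    lt() {                      # lt 1.15 2  ->  exit 0 (true)
        local IFS=.-: v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                # equal versions are not strictly less-than
    }

The real helper routes through cmp_versions with an operator argument and a decimal sanity check on each field; this sketch keeps only the field-wise loop the trace walks through.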
00:17:25.339 18:11:53 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:25.339 18:11:53 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.339 18:11:53 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:25.339 18:11:53 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:25.339 18:11:53 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.339 18:11:53 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.339 18:11:53 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:25.339 18:11:53 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:25.339 18:11:53 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.339 18:11:53 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.339 18:11:53 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:25.339 18:11:53 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:25.339 18:11:53 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.339 18:11:53 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.339 18:11:53 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:25.339 18:11:53 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:25.339 18:11:53 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.339 18:11:53 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.339 18:11:53 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.339 18:11:53 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.339 18:11:53 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:25.339 18:11:53 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:25.339 18:11:53 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.339 18:11:53 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.339 18:11:53 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.339 18:11:53 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:25.339 18:11:53 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:25.339 18:11:53 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:25.339 18:11:53 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:25.339 18:11:53 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:25.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:25.339 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:25.339 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:25.339 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:25.339 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:25.339 18:11:54 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87725 00:17:25.339 18:11:54 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87725 00:17:25.339 18:11:54 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:25.339 18:11:54 ftl -- common/autotest_common.sh@835 -- # '[' -z 87725 ']' 00:17:25.339 18:11:54 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.339 18:11:54 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.339 18:11:54 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.339 18:11:54 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.339 18:11:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:25.339 [2024-12-13 18:11:54.273652] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:25.339 [2024-12-13 18:11:54.273756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87725 ] 00:17:25.339 [2024-12-13 18:11:54.410680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.339 [2024-12-13 18:11:54.435308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.339 18:11:55 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.339 18:11:55 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:25.339 18:11:55 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:25.339 18:11:55 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:25.339 18:11:55 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:25.339 18:11:55 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@50 -- # break 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@63 -- # break 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@66 -- # killprocess 87725 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@954 -- # '[' -z 87725 ']' 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@958 -- # kill -0 87725 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@959 -- # uname 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.339 18:11:56 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87725 00:17:25.339 killing process with pid 87725 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87725' 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@973 -- # kill 87725 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@978 -- # wait 87725 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:25.339 18:11:56 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.339 18:11:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:25.339 ************************************ 00:17:25.339 START TEST ftl_fio_basic 00:17:25.339 ************************************ 00:17:25.339 18:11:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:25.339 * Looking for test storage... 00:17:25.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.339 18:11:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:25.339 18:11:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:25.339 18:11:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:25.339 18:11:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:25.339 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.340 18:11:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.340 --rc genhtml_branch_coverage=1 00:17:25.340 --rc genhtml_function_coverage=1 00:17:25.340 --rc genhtml_legend=1 00:17:25.340 --rc geninfo_all_blocks=1 00:17:25.340 --rc geninfo_unexecuted_blocks=1 00:17:25.340 00:17:25.340 ' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.340 --rc genhtml_branch_coverage=1 00:17:25.340 --rc genhtml_function_coverage=1 00:17:25.340 --rc genhtml_legend=1 00:17:25.340 --rc geninfo_all_blocks=1 00:17:25.340 --rc geninfo_unexecuted_blocks=1 00:17:25.340 00:17:25.340 ' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.340 --rc genhtml_branch_coverage=1 00:17:25.340 --rc genhtml_function_coverage=1 00:17:25.340 --rc genhtml_legend=1 00:17:25.340 --rc geninfo_all_blocks=1 00:17:25.340 --rc geninfo_unexecuted_blocks=1 00:17:25.340 00:17:25.340 ' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.340 --rc genhtml_branch_coverage=1 00:17:25.340 --rc genhtml_function_coverage=1 00:17:25.340 --rc genhtml_legend=1 00:17:25.340 --rc geninfo_all_blocks=1 00:17:25.340 --rc geninfo_unexecuted_blocks=1 00:17:25.340 00:17:25.340 ' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
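Before handing off to ftl_fio_basic, ftl.sh chose its two PCIe controllers with a pair of jq filters over bdev_get_bdevs output, as traced a few lines up: the NV-cache disk must report 64-byte metadata (md_size==64), be non-zoned, and hold at least 1310720 blocks, which matched 0000:00:10.0; any remaining non-zoned disk of the same minimum size becomes the base device, here 0000:00:11.0. A condensed sketch of that selection, again with rpc.py abbreviating the traced scripts/rpc.py path:

    cache=$(rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
         .driver_specific.nvme[].pci_address')                   # -> 0000:00:10.0
    base=$(rpc.py bdev_get_bdevs | jq -r --arg c "$cache" \
        '.[] | select(.driver_specific.nvme[0].pci_address != $c
         and .zoned == false and .num_blocks >= 1310720)
         .driver_specific.nvme[].pci_address')                   # -> 0000:00:11.0

The traced filter hard-codes "0000:00:10.0" where this sketch passes $c; the exclusion is the same either way.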
00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=87842 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 87842 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 87842 ']' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:25.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:25.340 [2024-12-13 18:11:57.087262] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
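Each stage in this log brings its target up the same way: launch spdk_tgt with the stage's core mask, install a cleanup trap, then block in waitforlisten until the RPC socket answers. For the fio stage starting here, that amounts to the following sketch:

    "$SPDK_BIN_DIR/spdk_tgt" -m 7 &    # reactors on cores 0-2 (mask 7; pid 87842 above)
    svcpid=$!
    trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$svcpid"            # polls until /var/tmp/spdk.sock accepts RPCs

waitforlisten is the autotest_common.sh helper behind every 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' line in this log.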
00:17:25.340 [2024-12-13 18:11:57.087467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87842 ] 00:17:25.340 [2024-12-13 18:11:57.226855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:25.340 [2024-12-13 18:11:57.254229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.340 [2024-12-13 18:11:57.254450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.340 [2024-12-13 18:11:57.254485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:25.340 18:11:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:25.340 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:25.340 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:25.340 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:25.340 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:25.340 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.340 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:25.340 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:25.340 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:25.340 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.341 { 00:17:25.341 "name": "nvme0n1", 00:17:25.341 "aliases": [ 00:17:25.341 "b78dc0a2-ea58-41c0-8643-1a77adc048b4" 00:17:25.341 ], 00:17:25.341 "product_name": "NVMe disk", 00:17:25.341 "block_size": 4096, 00:17:25.341 "num_blocks": 1310720, 00:17:25.341 "uuid": "b78dc0a2-ea58-41c0-8643-1a77adc048b4", 00:17:25.341 "numa_id": -1, 00:17:25.341 "assigned_rate_limits": { 00:17:25.341 "rw_ios_per_sec": 0, 00:17:25.341 "rw_mbytes_per_sec": 0, 00:17:25.341 "r_mbytes_per_sec": 0, 00:17:25.341 "w_mbytes_per_sec": 0 00:17:25.341 }, 00:17:25.341 "claimed": false, 00:17:25.341 "zoned": false, 00:17:25.341 "supported_io_types": { 00:17:25.341 "read": true, 00:17:25.341 "write": true, 00:17:25.341 "unmap": true, 00:17:25.341 "flush": true, 00:17:25.341 "reset": true, 00:17:25.341 "nvme_admin": true, 00:17:25.341 "nvme_io": true, 00:17:25.341 "nvme_io_md": false, 00:17:25.341 "write_zeroes": true, 00:17:25.341 "zcopy": false, 00:17:25.341 "get_zone_info": false, 00:17:25.341 "zone_management": false, 00:17:25.341 "zone_append": false, 00:17:25.341 "compare": true, 00:17:25.341 "compare_and_write": false, 00:17:25.341 "abort": true, 00:17:25.341 
"seek_hole": false, 00:17:25.341 "seek_data": false, 00:17:25.341 "copy": true, 00:17:25.341 "nvme_iov_md": false 00:17:25.341 }, 00:17:25.341 "driver_specific": { 00:17:25.341 "nvme": [ 00:17:25.341 { 00:17:25.341 "pci_address": "0000:00:11.0", 00:17:25.341 "trid": { 00:17:25.341 "trtype": "PCIe", 00:17:25.341 "traddr": "0000:00:11.0" 00:17:25.341 }, 00:17:25.341 "ctrlr_data": { 00:17:25.341 "cntlid": 0, 00:17:25.341 "vendor_id": "0x1b36", 00:17:25.341 "model_number": "QEMU NVMe Ctrl", 00:17:25.341 "serial_number": "12341", 00:17:25.341 "firmware_revision": "8.0.0", 00:17:25.341 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:25.341 "oacs": { 00:17:25.341 "security": 0, 00:17:25.341 "format": 1, 00:17:25.341 "firmware": 0, 00:17:25.341 "ns_manage": 1 00:17:25.341 }, 00:17:25.341 "multi_ctrlr": false, 00:17:25.341 "ana_reporting": false 00:17:25.341 }, 00:17:25.341 "vs": { 00:17:25.341 "nvme_version": "1.4" 00:17:25.341 }, 00:17:25.341 "ns_data": { 00:17:25.341 "id": 1, 00:17:25.341 "can_share": false 00:17:25.341 } 00:17:25.341 } 00:17:25.341 ], 00:17:25.341 "mp_policy": "active_passive" 00:17:25.341 } 00:17:25.341 } 00:17:25.341 ]' 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5b66962b-6743-45e4-b44b-98ca46b63241 00:17:25.341 18:11:58 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5b66962b-6743-45e4-b44b-98ca46b63241 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 
00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.341 { 00:17:25.341 "name": "e3c7f80e-933b-4ab9-b7cc-48827e5b69e3", 00:17:25.341 "aliases": [ 00:17:25.341 "lvs/nvme0n1p0" 00:17:25.341 ], 00:17:25.341 "product_name": "Logical Volume", 00:17:25.341 "block_size": 4096, 00:17:25.341 "num_blocks": 26476544, 00:17:25.341 "uuid": "e3c7f80e-933b-4ab9-b7cc-48827e5b69e3", 00:17:25.341 "assigned_rate_limits": { 00:17:25.341 "rw_ios_per_sec": 0, 00:17:25.341 "rw_mbytes_per_sec": 0, 00:17:25.341 "r_mbytes_per_sec": 0, 00:17:25.341 "w_mbytes_per_sec": 0 00:17:25.341 }, 00:17:25.341 "claimed": false, 00:17:25.341 "zoned": false, 00:17:25.341 "supported_io_types": { 00:17:25.341 "read": true, 00:17:25.341 "write": true, 00:17:25.341 "unmap": true, 00:17:25.341 "flush": false, 00:17:25.341 "reset": true, 00:17:25.341 "nvme_admin": false, 00:17:25.341 "nvme_io": false, 00:17:25.341 "nvme_io_md": false, 00:17:25.341 "write_zeroes": true, 00:17:25.341 "zcopy": false, 00:17:25.341 "get_zone_info": false, 00:17:25.341 "zone_management": false, 00:17:25.341 "zone_append": false, 00:17:25.341 "compare": false, 00:17:25.341 "compare_and_write": false, 00:17:25.341 "abort": false, 00:17:25.341 "seek_hole": true, 00:17:25.341 "seek_data": true, 00:17:25.341 "copy": false, 00:17:25.341 "nvme_iov_md": false 00:17:25.341 }, 00:17:25.341 "driver_specific": { 00:17:25.341 "lvol": { 00:17:25.341 "lvol_store_uuid": "5b66962b-6743-45e4-b44b-98ca46b63241", 00:17:25.341 "base_bdev": "nvme0n1", 00:17:25.341 "thin_provision": true, 00:17:25.341 "num_allocated_clusters": 0, 00:17:25.341 "snapshot": false, 00:17:25.341 "clone": false, 00:17:25.341 "esnap_clone": false 00:17:25.341 } 00:17:25.341 } 00:17:25.341 } 00:17:25.341 ]' 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.341 18:11:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:25.341 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.600 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.600 { 00:17:25.600 "name": "e3c7f80e-933b-4ab9-b7cc-48827e5b69e3", 00:17:25.600 "aliases": [ 00:17:25.600 "lvs/nvme0n1p0" 00:17:25.600 ], 00:17:25.600 "product_name": "Logical Volume", 00:17:25.600 "block_size": 4096, 00:17:25.601 "num_blocks": 26476544, 00:17:25.601 "uuid": "e3c7f80e-933b-4ab9-b7cc-48827e5b69e3", 00:17:25.601 "assigned_rate_limits": { 00:17:25.601 "rw_ios_per_sec": 0, 00:17:25.601 "rw_mbytes_per_sec": 0, 00:17:25.601 "r_mbytes_per_sec": 0, 00:17:25.601 "w_mbytes_per_sec": 0 00:17:25.601 }, 00:17:25.601 "claimed": false, 00:17:25.601 "zoned": false, 00:17:25.601 "supported_io_types": { 00:17:25.601 "read": true, 00:17:25.601 "write": true, 00:17:25.601 "unmap": true, 00:17:25.601 "flush": false, 00:17:25.601 "reset": true, 00:17:25.601 "nvme_admin": false, 00:17:25.601 "nvme_io": false, 00:17:25.601 "nvme_io_md": false, 00:17:25.601 "write_zeroes": true, 00:17:25.601 "zcopy": false, 00:17:25.601 "get_zone_info": false, 00:17:25.601 "zone_management": false, 00:17:25.601 "zone_append": false, 00:17:25.601 "compare": false, 00:17:25.601 "compare_and_write": false, 00:17:25.601 "abort": false, 00:17:25.601 "seek_hole": true, 00:17:25.601 "seek_data": true, 00:17:25.601 "copy": false, 00:17:25.601 "nvme_iov_md": false 00:17:25.601 }, 00:17:25.601 "driver_specific": { 00:17:25.601 "lvol": { 00:17:25.601 "lvol_store_uuid": "5b66962b-6743-45e4-b44b-98ca46b63241", 00:17:25.601 "base_bdev": "nvme0n1", 00:17:25.601 "thin_provision": true, 00:17:25.601 "num_allocated_clusters": 0, 00:17:25.601 "snapshot": false, 00:17:25.601 "clone": false, 00:17:25.601 "esnap_clone": false 00:17:25.601 } 00:17:25.601 } 00:17:25.601 } 00:17:25.601 ]' 00:17:25.601 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:25.601 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.601 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.601 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:25.601 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:25.601 18:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:25.601 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:25.601 18:11:59 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:25.859 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 00:17:25.859 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.859 { 00:17:25.859 "name": "e3c7f80e-933b-4ab9-b7cc-48827e5b69e3", 00:17:25.859 "aliases": [ 00:17:25.859 "lvs/nvme0n1p0" 00:17:25.859 ], 00:17:25.859 "product_name": "Logical Volume", 00:17:25.859 "block_size": 4096, 00:17:25.859 "num_blocks": 26476544, 00:17:25.859 "uuid": "e3c7f80e-933b-4ab9-b7cc-48827e5b69e3", 00:17:25.859 "assigned_rate_limits": { 00:17:25.859 "rw_ios_per_sec": 0, 00:17:25.859 "rw_mbytes_per_sec": 0, 00:17:25.859 "r_mbytes_per_sec": 0, 00:17:25.859 "w_mbytes_per_sec": 0 00:17:25.859 }, 00:17:25.859 "claimed": false, 00:17:25.859 "zoned": false, 00:17:25.859 "supported_io_types": { 00:17:25.859 "read": true, 00:17:25.859 "write": true, 00:17:25.859 "unmap": true, 00:17:25.859 "flush": false, 00:17:25.859 "reset": true, 00:17:25.859 "nvme_admin": false, 00:17:25.859 "nvme_io": false, 00:17:25.859 "nvme_io_md": false, 00:17:25.859 "write_zeroes": true, 00:17:25.859 "zcopy": false, 00:17:25.859 "get_zone_info": false, 00:17:25.859 "zone_management": false, 00:17:25.859 "zone_append": false, 00:17:25.859 "compare": false, 00:17:25.859 "compare_and_write": false, 00:17:25.859 "abort": false, 00:17:25.859 "seek_hole": true, 00:17:25.859 "seek_data": true, 00:17:25.859 "copy": false, 00:17:25.859 "nvme_iov_md": false 00:17:25.859 }, 00:17:25.859 "driver_specific": { 00:17:25.859 "lvol": { 00:17:25.859 "lvol_store_uuid": "5b66962b-6743-45e4-b44b-98ca46b63241", 00:17:25.859 "base_bdev": "nvme0n1", 00:17:25.859 "thin_provision": true, 00:17:25.859 "num_allocated_clusters": 0, 00:17:25.859 "snapshot": false, 00:17:25.860 "clone": false, 00:17:25.860 "esnap_clone": false 00:17:25.860 } 00:17:25.860 } 00:17:25.860 } 00:17:25.860 ]' 00:17:25.860 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:26.119 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:26.119 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:26.119 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:26.119 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:26.119 18:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:26.119 18:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:26.119 18:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:26.119 18:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e3c7f80e-933b-4ab9-b7cc-48827e5b69e3 -c nvc0n1p0 --l2p_dram_limit 60 00:17:26.119 [2024-12-13 18:12:00.469374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.119 [2024-12-13 18:12:00.469433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.119 [2024-12-13 18:12:00.469446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.119 
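Note on the two recurring items above. First, the "[: -eq: unary operator expected" message is a real (if harmless here) bug captured from test/ftl/fio.sh: line 52 runs a '[ ... -eq 1 ]' test on a variable that is unset in this configuration, so after expansion '[' receives only '-eq 1' and complains; the harness treats it as a false condition and the trace simply continues at fio.sh@56. Second, the repeated get_bdev_size calls each derive the same figure from the jq output: block_size 4096 B × num_blocks 26476544 = 103424 MiB, the bdev_size echoed at autotest_common.sh@1392. A minimal sketch of the shell failure and the usual hardening follows; FLAG is a stand-in name, since the log does not show which variable line 52 actually tests:

unset FLAG
[ $FLAG -eq 1 ]          # FLAG is a stand-in; expands to '[ -eq 1 ]' -> "[: -eq: unary operator expected"
[ "${FLAG:-0}" -eq 1 ]   # defaulting the expansion gives test its two operands back
[[ $FLAG -eq 1 ]]        # bash's [[ ]] evaluates the empty operand arithmetically as 0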
[2024-12-13 18:12:00.469454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.119 [2024-12-13 18:12:00.469510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.119 [2024-12-13 18:12:00.469520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.119 [2024-12-13 18:12:00.469527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:26.119 [2024-12-13 18:12:00.469536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.119 [2024-12-13 18:12:00.469566] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.119 [2024-12-13 18:12:00.469831] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.119 [2024-12-13 18:12:00.469854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.119 [2024-12-13 18:12:00.469864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.119 [2024-12-13 18:12:00.469880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:17:26.119 [2024-12-13 18:12:00.469888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.119 [2024-12-13 18:12:00.469951] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4c6fe9c3-6b62-4ebf-8dd2-a4509a125306 00:17:26.119 [2024-12-13 18:12:00.471231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.119 [2024-12-13 18:12:00.471270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:26.119 [2024-12-13 18:12:00.471280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:26.119 [2024-12-13 18:12:00.471287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.119 [2024-12-13 18:12:00.477932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.119 [2024-12-13 18:12:00.477959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.119 [2024-12-13 18:12:00.477979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.587 ms 00:17:26.119 [2024-12-13 18:12:00.477998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.119 [2024-12-13 18:12:00.478087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.119 [2024-12-13 18:12:00.478096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.119 [2024-12-13 18:12:00.478114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:26.119 [2024-12-13 18:12:00.478120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.119 [2024-12-13 18:12:00.478175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.119 [2024-12-13 18:12:00.478187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.119 [2024-12-13 18:12:00.478195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:26.119 [2024-12-13 18:12:00.478201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.119 [2024-12-13 18:12:00.478238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.119 [2024-12-13 18:12:00.479830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.119 [2024-12-13 
18:12:00.479858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.119 [2024-12-13 18:12:00.479867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.598 ms 00:17:26.119 [2024-12-13 18:12:00.479875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.119 [2024-12-13 18:12:00.479913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.119 [2024-12-13 18:12:00.479922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.119 [2024-12-13 18:12:00.479929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:26.119 [2024-12-13 18:12:00.479939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.119 [2024-12-13 18:12:00.479962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:26.119 [2024-12-13 18:12:00.480094] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.119 [2024-12-13 18:12:00.480113] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.119 [2024-12-13 18:12:00.480126] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.119 [2024-12-13 18:12:00.480135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.119 [2024-12-13 18:12:00.480146] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.119 [2024-12-13 18:12:00.480152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:26.119 [2024-12-13 18:12:00.480169] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.120 [2024-12-13 18:12:00.480174] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.120 [2024-12-13 18:12:00.480181] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.120 [2024-12-13 18:12:00.480188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.120 [2024-12-13 18:12:00.480196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.120 [2024-12-13 18:12:00.480212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:17:26.120 [2024-12-13 18:12:00.480220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.120 [2024-12-13 18:12:00.480304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.120 [2024-12-13 18:12:00.480315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.120 [2024-12-13 18:12:00.480324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:26.120 [2024-12-13 18:12:00.480332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.120 [2024-12-13 18:12:00.480424] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.120 [2024-12-13 18:12:00.480434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.120 [2024-12-13 18:12:00.480440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.120 [2024-12-13 18:12:00.480449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480455] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:26.120 [2024-12-13 18:12:00.480462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:26.120 [2024-12-13 18:12:00.480475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.120 [2024-12-13 18:12:00.480481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.120 [2024-12-13 18:12:00.480495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.120 [2024-12-13 18:12:00.480503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:26.120 [2024-12-13 18:12:00.480509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.120 [2024-12-13 18:12:00.480519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.120 [2024-12-13 18:12:00.480525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:26.120 [2024-12-13 18:12:00.480532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.120 [2024-12-13 18:12:00.480548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:26.120 [2024-12-13 18:12:00.480558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.120 [2024-12-13 18:12:00.480572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.120 [2024-12-13 18:12:00.480586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.120 [2024-12-13 18:12:00.480593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.120 [2024-12-13 18:12:00.480606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.120 [2024-12-13 18:12:00.480612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.120 [2024-12-13 18:12:00.480625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.120 [2024-12-13 18:12:00.480635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.120 [2024-12-13 18:12:00.480649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.120 [2024-12-13 18:12:00.480655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.120 [2024-12-13 18:12:00.480668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.120 [2024-12-13 18:12:00.480675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:26.120 [2024-12-13 18:12:00.480680] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.120 [2024-12-13 18:12:00.480688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.120 [2024-12-13 18:12:00.480693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:26.120 [2024-12-13 18:12:00.480700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.120 [2024-12-13 18:12:00.480713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:26.120 [2024-12-13 18:12:00.480719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480725] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.120 [2024-12-13 18:12:00.480732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.120 [2024-12-13 18:12:00.480743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.120 [2024-12-13 18:12:00.480760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.120 [2024-12-13 18:12:00.480777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:26.120 [2024-12-13 18:12:00.480783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.120 [2024-12-13 18:12:00.480790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.120 [2024-12-13 18:12:00.480798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.120 [2024-12-13 18:12:00.480806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.120 [2024-12-13 18:12:00.480813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.120 [2024-12-13 18:12:00.480822] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.120 [2024-12-13 18:12:00.480831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.120 [2024-12-13 18:12:00.480840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:26.120 [2024-12-13 18:12:00.480847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:26.120 [2024-12-13 18:12:00.480854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:26.120 [2024-12-13 18:12:00.480860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:26.120 [2024-12-13 18:12:00.480867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:26.120 [2024-12-13 18:12:00.480873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:26.120 [2024-12-13 18:12:00.480881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:26.120 [2024-12-13 18:12:00.480886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:26.120 [2024-12-13 18:12:00.480893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:26.120 [2024-12-13 18:12:00.480899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:26.120 [2024-12-13 18:12:00.480906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:26.120 [2024-12-13 18:12:00.480912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:26.120 [2024-12-13 18:12:00.480919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:26.120 [2024-12-13 18:12:00.480924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:26.120 [2024-12-13 18:12:00.480933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.120 [2024-12-13 18:12:00.480939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.120 [2024-12-13 18:12:00.480946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.120 [2024-12-13 18:12:00.480952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.120 [2024-12-13 18:12:00.480959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.120 [2024-12-13 18:12:00.480964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.120 [2024-12-13 18:12:00.480984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.120 [2024-12-13 18:12:00.480991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.120 [2024-12-13 18:12:00.481000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:17:26.120 [2024-12-13 18:12:00.481007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.120 [2024-12-13 18:12:00.481073] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
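The layout dump above is internally consistent and worth a quick cross-check: the superblock reports 20971520 L2P entries at 4 bytes each, which is exactly the 80.00 MiB "Region l2p" in the NV cache layout, and 20971520 blocks × 4096 B is the 80 GiB namespace the resulting ftl0 bdev exposes (its num_blocks is 20971520 below). With --l2p_dram_limit 60 only part of that table can stay resident, hence the later "l2p maximum resident size is: 59 (of 60) MiB" notice, and the scrub announced above covers the 5171 MiB cache in 5 chunks, taking about 2.1 s. The arithmetic, as a shell sketch:

l2p_entries=20971520; addr_size=4; block_size=4096
echo $(( l2p_entries * addr_size / 1024 / 1024 ))          # 80 -> "Region l2p ... blocks: 80.00 MiB"
echo $(( l2p_entries * block_size / 1024 / 1024 / 1024 ))  # 80 -> ftl0 num_blocks 20971520 = 80 GiB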
00:17:26.120 [2024-12-13 18:12:00.481239] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:28.667 [2024-12-13 18:12:02.557593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.557647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:28.667 [2024-12-13 18:12:02.557663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2076.510 ms 00:17:28.667 [2024-12-13 18:12:02.557672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.568508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.568706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.667 [2024-12-13 18:12:02.568732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.734 ms 00:17:28.667 [2024-12-13 18:12:02.568740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.568843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.568852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.667 [2024-12-13 18:12:02.568878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:28.667 [2024-12-13 18:12:02.568886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.589114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.589171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.667 [2024-12-13 18:12:02.589197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.157 ms 00:17:28.667 [2024-12-13 18:12:02.589210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.589299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.589317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.667 [2024-12-13 18:12:02.589334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.667 [2024-12-13 18:12:02.589346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.589861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.589917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.667 [2024-12-13 18:12:02.589936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:17:28.667 [2024-12-13 18:12:02.589954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.590154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.590169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.667 [2024-12-13 18:12:02.590184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:17:28.667 [2024-12-13 18:12:02.590196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.598364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.598404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.667 [2024-12-13 
18:12:02.598421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.130 ms 00:17:28.667 [2024-12-13 18:12:02.598437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.607692] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:28.667 [2024-12-13 18:12:02.625216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.625264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.667 [2024-12-13 18:12:02.625286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.675 ms 00:17:28.667 [2024-12-13 18:12:02.625298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.656653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.656691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:28.667 [2024-12-13 18:12:02.656712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.321 ms 00:17:28.667 [2024-12-13 18:12:02.656723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.656910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.656926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.667 [2024-12-13 18:12:02.656934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:28.667 [2024-12-13 18:12:02.656955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.659718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.659755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:28.667 [2024-12-13 18:12:02.659765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.729 ms 00:17:28.667 [2024-12-13 18:12:02.659775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.662190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.662225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:28.667 [2024-12-13 18:12:02.662235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.373 ms 00:17:28.667 [2024-12-13 18:12:02.662259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.662563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.662715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:28.667 [2024-12-13 18:12:02.662730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:28.667 [2024-12-13 18:12:02.662743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.684376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.684416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:28.667 [2024-12-13 18:12:02.684427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.603 ms 00:17:28.667 [2024-12-13 18:12:02.684447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.688512] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.688547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:28.667 [2024-12-13 18:12:02.688557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.988 ms 00:17:28.667 [2024-12-13 18:12:02.688567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.691320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.691352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:28.667 [2024-12-13 18:12:02.691361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:17:28.667 [2024-12-13 18:12:02.691370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.694382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.694417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:28.667 [2024-12-13 18:12:02.694427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:17:28.667 [2024-12-13 18:12:02.694439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.694488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.694501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:28.667 [2024-12-13 18:12:02.694512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:28.667 [2024-12-13 18:12:02.694523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.694603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.667 [2024-12-13 18:12:02.694616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:28.667 [2024-12-13 18:12:02.694629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:28.667 [2024-12-13 18:12:02.694639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.667 [2024-12-13 18:12:02.695839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2225.979 ms, result 0 00:17:28.667 { 00:17:28.667 "name": "ftl0", 00:17:28.667 "uuid": "4c6fe9c3-6b62-4ebf-8dd2-a4509a125306" 00:17:28.667 } 00:17:28.667 18:12:02 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:28.667 18:12:02 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:28.667 18:12:02 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:28.667 18:12:02 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:28.667 18:12:02 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:28.667 18:12:02 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:28.667 18:12:02 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:28.667 18:12:02 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:28.929 [ 00:17:28.929 { 00:17:28.929 "name": "ftl0", 00:17:28.929 "aliases": [ 00:17:28.929 "4c6fe9c3-6b62-4ebf-8dd2-a4509a125306" 00:17:28.929 ], 00:17:28.929 "product_name": "FTL disk", 00:17:28.929 
"block_size": 4096, 00:17:28.929 "num_blocks": 20971520, 00:17:28.929 "uuid": "4c6fe9c3-6b62-4ebf-8dd2-a4509a125306", 00:17:28.929 "assigned_rate_limits": { 00:17:28.929 "rw_ios_per_sec": 0, 00:17:28.929 "rw_mbytes_per_sec": 0, 00:17:28.929 "r_mbytes_per_sec": 0, 00:17:28.929 "w_mbytes_per_sec": 0 00:17:28.929 }, 00:17:28.929 "claimed": false, 00:17:28.929 "zoned": false, 00:17:28.929 "supported_io_types": { 00:17:28.929 "read": true, 00:17:28.929 "write": true, 00:17:28.929 "unmap": true, 00:17:28.929 "flush": true, 00:17:28.929 "reset": false, 00:17:28.929 "nvme_admin": false, 00:17:28.929 "nvme_io": false, 00:17:28.929 "nvme_io_md": false, 00:17:28.929 "write_zeroes": true, 00:17:28.929 "zcopy": false, 00:17:28.929 "get_zone_info": false, 00:17:28.929 "zone_management": false, 00:17:28.929 "zone_append": false, 00:17:28.929 "compare": false, 00:17:28.929 "compare_and_write": false, 00:17:28.929 "abort": false, 00:17:28.929 "seek_hole": false, 00:17:28.929 "seek_data": false, 00:17:28.929 "copy": false, 00:17:28.929 "nvme_iov_md": false 00:17:28.929 }, 00:17:28.929 "driver_specific": { 00:17:28.929 "ftl": { 00:17:28.929 "base_bdev": "e3c7f80e-933b-4ab9-b7cc-48827e5b69e3", 00:17:28.929 "cache": "nvc0n1p0" 00:17:28.929 } 00:17:28.929 } 00:17:28.929 } 00:17:28.929 ] 00:17:28.929 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:28.929 18:12:03 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:28.929 18:12:03 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:28.929 18:12:03 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:28.929 18:12:03 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:29.190 [2024-12-13 18:12:03.458598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.458645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:29.190 [2024-12-13 18:12:03.458659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.190 [2024-12-13 18:12:03.458665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.458700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.190 [2024-12-13 18:12:03.459238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.459280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:29.190 [2024-12-13 18:12:03.459291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:17:29.190 [2024-12-13 18:12:03.459300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.459688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.459704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:29.190 [2024-12-13 18:12:03.459712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:17:29.190 [2024-12-13 18:12:03.459720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.462148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.462317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:29.190 [2024-12-13 
18:12:03.462330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.394 ms 00:17:29.190 [2024-12-13 18:12:03.462339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.467094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.467166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:29.190 [2024-12-13 18:12:03.467237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.732 ms 00:17:29.190 [2024-12-13 18:12:03.467271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.468824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.468920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:29.190 [2024-12-13 18:12:03.468961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:17:29.190 [2024-12-13 18:12:03.468981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.472830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.472933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:29.190 [2024-12-13 18:12:03.472980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.809 ms 00:17:29.190 [2024-12-13 18:12:03.473000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.473151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.473179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:29.190 [2024-12-13 18:12:03.473197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:17:29.190 [2024-12-13 18:12:03.473239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.474573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.474660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:29.190 [2024-12-13 18:12:03.474701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:17:29.190 [2024-12-13 18:12:03.474720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.475759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.475849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:29.190 [2024-12-13 18:12:03.475892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:17:29.190 [2024-12-13 18:12:03.475911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.476710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.476798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:29.190 [2024-12-13 18:12:03.476845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:17:29.190 [2024-12-13 18:12:03.476865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.477656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.190 [2024-12-13 18:12:03.477748] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:29.190 [2024-12-13 18:12:03.477789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:17:29.190 [2024-12-13 18:12:03.477807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.190 [2024-12-13 18:12:03.477848] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:29.190 [2024-12-13 18:12:03.477949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:29.190 [2024-12-13 18:12:03.477978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:29.190 [2024-12-13 18:12:03.478002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 
18:12:03.478681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.478981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:29.191 [2024-12-13 18:12:03.479855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.479902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.480983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:29.191 [2024-12-13 18:12:03.481287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:29.192 [2024-12-13 18:12:03.481799] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:29.192 [2024-12-13 18:12:03.481816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c6fe9c3-6b62-4ebf-8dd2-a4509a125306 00:17:29.192 [2024-12-13 18:12:03.481872] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:29.192 [2024-12-13 18:12:03.481888] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:29.192 [2024-12-13 18:12:03.481904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:29.192 [2024-12-13 18:12:03.481920] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:29.192 [2024-12-13 18:12:03.481936] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:29.192 [2024-12-13 18:12:03.481963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:29.192 [2024-12-13 18:12:03.482016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:29.192 [2024-12-13 18:12:03.482029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:29.192 [2024-12-13 18:12:03.482045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:29.192 [2024-12-13 18:12:03.482059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.192 [2024-12-13 18:12:03.482076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:29.192 [2024-12-13 18:12:03.482119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.212 ms 00:17:29.192 [2024-12-13 18:12:03.482129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.483887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.192 [2024-12-13 18:12:03.483985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:29.192 [2024-12-13 18:12:03.483996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:17:29.192 [2024-12-13 18:12:03.484014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.484125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.192 [2024-12-13 18:12:03.484138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:29.192 [2024-12-13 18:12:03.484145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:29.192 [2024-12-13 18:12:03.484156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.490157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.490188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.192 [2024-12-13 18:12:03.490196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.490204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 
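Shutdown mirrors startup: the bdev_ftl_unload RPC issued at fio.sh@73 above first persists the L2P, NV cache, valid-map, P2L, band and trim metadata and sets the clean state, then revisits every startup step in reverse as the "Rollback" actions that follow. Steps whose resources were already released during the orderly persist phase report duration: 0.000 ms, the whole "FTL shutdown" process finishes in 53.254 ms with result 0, and the RPC prints true. The same teardown issued by hand, using the rpc.py path this log uses throughout:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0   # prints "true" on success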
[2024-12-13 18:12:03.490270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.490281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.192 [2024-12-13 18:12:03.490288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.490299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.490374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.490388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.192 [2024-12-13 18:12:03.490395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.490403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.490434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.490442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.192 [2024-12-13 18:12:03.490449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.490457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.501916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.501956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.192 [2024-12-13 18:12:03.501965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.501973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.511002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.511177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.192 [2024-12-13 18:12:03.511202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.511212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.511300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.511313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.192 [2024-12-13 18:12:03.511320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.511328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.511379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.511388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.192 [2024-12-13 18:12:03.511395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.511404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.511485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.511496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.192 [2024-12-13 18:12:03.511503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.511510] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.511551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.511562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:29.192 [2024-12-13 18:12:03.511569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.511577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.511620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.511632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.192 [2024-12-13 18:12:03.511639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.511646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.511697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.192 [2024-12-13 18:12:03.511709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.192 [2024-12-13 18:12:03.511717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.192 [2024-12-13 18:12:03.511725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.192 [2024-12-13 18:12:03.511902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.254 ms, result 0 00:17:29.192 true 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 87842 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 87842 ']' 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 87842 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87842 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87842' 00:17:29.192 killing process with pid 87842 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 87842 00:17:29.192 18:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 87842 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:39.273 18:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:39.273 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:39.273 fio-3.35 00:17:39.273 Starting 1 thread 00:17:44.557 00:17:44.557 test: (groupid=0, jobs=1): err= 0: pid=88004: Fri Dec 13 18:12:17 2024 00:17:44.557 read: IOPS=835, BW=55.5MiB/s (58.2MB/s)(255MiB/4586msec) 00:17:44.557 slat (nsec): min=3237, max=67364, avg=7351.24, stdev=3558.04 00:17:44.557 clat (usec): min=257, max=1656, avg=539.18, stdev=221.19 00:17:44.557 lat (usec): min=261, max=1662, avg=546.53, stdev=222.06 00:17:44.557 clat percentiles (usec): 00:17:44.557 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 306], 20.00th=[ 314], 00:17:44.558 | 30.00th=[ 334], 40.00th=[ 461], 50.00th=[ 529], 60.00th=[ 553], 00:17:44.558 | 70.00th=[ 578], 80.00th=[ 775], 90.00th=[ 914], 95.00th=[ 938], 00:17:44.558 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1418], 99.95th=[ 1434], 00:17:44.558 | 99.99th=[ 1663] 00:17:44.558 write: IOPS=842, BW=55.9MiB/s (58.6MB/s)(256MiB/4579msec); 0 zone resets 00:17:44.558 slat (usec): min=14, max=268, avg=27.84, stdev= 8.23 00:17:44.558 clat (usec): min=273, max=2077, avg=602.92, stdev=276.50 00:17:44.558 lat (usec): min=301, max=2102, avg=630.77, stdev=277.00 00:17:44.558 clat percentiles (usec): 00:17:44.558 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:17:44.558 | 30.00th=[ 363], 40.00th=[ 537], 50.00th=[ 586], 60.00th=[ 635], 00:17:44.558 | 70.00th=[ 652], 80.00th=[ 824], 90.00th=[ 971], 95.00th=[ 1037], 00:17:44.558 | 99.00th=[ 1696], 99.50th=[ 1729], 99.90th=[ 1975], 99.95th=[ 2040], 00:17:44.558 | 99.99th=[ 2073] 00:17:44.558 bw ( KiB/s): min=36584, max=99416, per=100.00%, avg=57633.78, stdev=18041.13, samples=9 00:17:44.558 iops : min= 538, max= 1462, avg=847.56, stdev=265.31, samples=9 00:17:44.558 lat (usec) : 500=41.72%, 750=37.35%, 1000=16.06% 
00:17:44.558 lat (msec) : 2=4.83%, 4=0.04%
00:17:44.558 cpu : usr=98.84%, sys=0.15%, ctx=8, majf=0, minf=1324
00:17:44.558 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:44.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:44.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:44.558 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:44.558 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:44.558
00:17:44.558 Run status group 0 (all jobs):
00:17:44.558 READ: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=255MiB (267MB), run=4586-4586msec
00:17:44.558 WRITE: bw=55.9MiB/s (58.6MB/s), 55.9MiB/s-55.9MiB/s (58.6MB/s-58.6MB/s), io=256MiB (269MB), run=4579-4579msec
00:17:44.558 -----------------------------------------------------
00:17:44.558 Suppressions used:
00:17:44.558 count bytes template
00:17:44.558 1 5 /usr/src/fio/parse.c
00:17:44.558 1 8 libtcmalloc_minimal.so
00:17:44.558 1 904 libcrypto.so
00:17:44.558 -----------------------------------------------------
00:17:44.558
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:44.558 18:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:17:44.558 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:17:44.558 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:17:44.558 fio-3.35
00:17:44.558 Starting 2 threads
00:18:11.126
00:18:11.126 first_half: (groupid=0, jobs=1): err= 0: pid=88097: Fri Dec 13 18:12:44 2024
00:18:11.126 read: IOPS=2586, BW=10.1MiB/s (10.6MB/s)(255MiB/25221msec)
00:18:11.126 slat (nsec): min=3127, max=31959, avg=4793.02, stdev=1384.28
00:18:11.126 clat (usec): min=752, max=295699, avg=34517.88, stdev=18633.28
00:18:11.126 lat (usec): min=756, max=295702, avg=34522.67, stdev=18633.33
00:18:11.126 clat percentiles (msec):
00:18:11.126 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 30],
00:18:11.126 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32],
00:18:11.126 | 70.00th=[ 32], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 49],
00:18:11.126 | 99.00th=[ 142], 99.50th=[ 176], 99.90th=[ 232], 99.95th=[ 257],
00:18:11.126 | 99.99th=[ 288]
00:18:11.126 write: IOPS=3463, BW=13.5MiB/s (14.2MB/s)(256MiB/18924msec); 0 zone resets
00:18:11.126 slat (usec): min=3, max=1589, avg= 6.08, stdev=10.83
00:18:11.126 clat (usec): min=375, max=110491, avg=14880.40, stdev=25496.13
00:18:11.126 lat (usec): min=384, max=110496, avg=14886.48, stdev=25496.00
00:18:11.126 clat percentiles (usec):
00:18:11.126 | 1.00th=[ 791], 5.00th=[ 1057], 10.00th=[ 1237], 20.00th=[ 1680],
00:18:11.126 | 30.00th=[ 2147], 40.00th=[ 3261], 50.00th=[ 5211], 60.00th=[ 6587],
00:18:11.126 | 70.00th=[ 8848], 80.00th=[ 11207], 90.00th=[ 67634], 95.00th=[ 85459],
00:18:11.126 | 99.00th=[ 91751], 99.50th=[ 95945], 99.90th=[103285], 99.95th=[105382],
00:18:11.126 | 99.99th=[109577]
00:18:11.126 bw ( KiB/s): min= 1400, max=37048, per=74.21%, avg=20164.92, stdev=10631.97, samples=26
00:18:11.126 iops : min= 350, max= 9262, avg=5041.23, stdev=2657.99, samples=26
00:18:11.126 lat (usec) : 500=0.01%, 750=0.34%, 1000=1.50%
00:18:11.126 lat (msec) : 2=11.80%, 4=8.32%, 10=16.94%, 20=4.82%, 50=47.88%
00:18:11.126 lat (msec) : 100=7.40%, 250=0.96%, 500=0.03%
00:18:11.126 cpu : usr=99.27%, sys=0.10%, ctx=185, majf=0, minf=5577
00:18:11.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:18:11.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:11.126 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:11.126 issued rwts: total=65236,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:11.126 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:11.126 second_half: (groupid=0, jobs=1): err= 0: pid=88098: Fri Dec 13 18:12:44 2024
00:18:11.126 read: IOPS=2571, BW=10.0MiB/s (10.5MB/s)(254MiB/25337msec)
00:18:11.126 slat (nsec): min=3104, max=26598, avg=4055.58, stdev=847.51
00:18:11.126 clat (usec): min=665, max=285225, avg=34640.56, stdev=21879.67
00:18:11.126 lat (usec): min=669, max=285229, avg=34644.61, stdev=21879.77
00:18:11.126 clat percentiles (msec):
00:18:11.126 | 1.00th=[ 6], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 30],
00:18:11.126 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32],
00:18:11.126 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 40], 95.00th=[ 46],
00:18:11.126 | 99.00th=[ 161], 99.50th=[ 197], 99.90th=[ 259], 99.95th=[ 264],
00:18:11.126 | 99.99th=[ 271]
00:18:11.126 write: IOPS=3396, BW=13.3MiB/s (13.9MB/s)(256MiB/19296msec); 0 zone resets
00:18:11.126 slat (usec): min=3, max=1712, avg= 5.71, stdev= 7.78
00:18:11.126 clat (usec): min=347, max=110606, avg=15021.08, stdev=26149.24
00:18:11.126 lat (usec): min=354, max=110611, avg=15026.80, stdev=26149.26
00:18:11.126 clat percentiles (usec):
00:18:11.126 | 1.00th=[ 758], 5.00th=[ 1004], 10.00th=[ 1156], 20.00th=[ 1467],
00:18:11.126 | 30.00th=[ 1844], 40.00th=[ 2212], 50.00th=[ 3195], 60.00th=[ 5669],
00:18:11.126 | 70.00th=[ 8848], 80.00th=[ 11338], 90.00th=[ 68682], 95.00th=[ 86508],
00:18:11.126 | 99.00th=[ 93848], 99.50th=[ 96994], 99.90th=[104334], 99.95th=[105382],
00:18:11.126 | 99.99th=[109577]
00:18:11.126 bw ( KiB/s): min= 1152, max=50616, per=83.90%, avg=22795.13, stdev=12833.79, samples=23
00:18:11.126 iops : min= 288, max=12654, avg=5698.78, stdev=3208.45, samples=23
00:18:11.126 lat (usec) : 500=0.01%, 750=0.45%, 1000=1.97%
00:18:11.126 lat (msec) : 2=15.00%, 4=9.76%, 10=11.60%, 20=4.73%, 50=48.12%
00:18:11.126 lat (msec) : 100=7.10%, 250=1.17%, 500=0.07%
00:18:11.126 cpu : usr=99.23%, sys=0.17%, ctx=75, majf=0, minf=5551
00:18:11.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:18:11.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:11.126 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:11.126 issued rwts: total=65151,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:11.126 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:11.126
00:18:11.126 Run status group 0 (all jobs):
00:18:11.126 READ: bw=20.1MiB/s (21.1MB/s), 10.0MiB/s-10.1MiB/s (10.5MB/s-10.6MB/s), io=509MiB (534MB), run=25221-25337msec
00:18:11.126 WRITE: bw=26.5MiB/s (27.8MB/s), 13.3MiB/s-13.5MiB/s (13.9MB/s-14.2MB/s), io=512MiB (537MB), run=18924-19296msec
00:18:12.068 -----------------------------------------------------
00:18:12.068 Suppressions used:
00:18:12.068 count bytes template
00:18:12.068 2 10 /usr/src/fio/parse.c
00:18:12.069 1 96 /usr/src/fio/iolog.c
00:18:12.069 1 8 libtcmalloc_minimal.so
00:18:12.069 1 904 libcrypto.so
00:18:12.069 -----------------------------------------------------
00:18:12.069
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:18:12.069 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:12.329 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:12.329 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:12.329 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:18:12.329 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:12.329 18:12:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:18:12.329 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:18:12.329 fio-3.35
00:18:12.329 Starting 1 thread
00:18:27.243
00:18:27.243 test: (groupid=0, jobs=1): err= 0: pid=88417: Fri Dec 13 18:13:01 2024
00:18:27.243 read: IOPS=7866, BW=30.7MiB/s (32.2MB/s)(255MiB/8289msec)
00:18:27.243 slat (nsec): min=3034, max=46665, avg=3467.12, stdev=748.05
00:18:27.243 clat (usec): min=1127, max=35124, avg=16265.30, stdev=2316.84
00:18:27.243 lat (usec): min=1130, max=35128, avg=16268.77, stdev=2316.87
00:18:27.243 clat percentiles (usec):
00:18:27.243 | 1.00th=[13960], 5.00th=[14484], 10.00th=[14746], 20.00th=[15008],
00:18:27.243 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795],
00:18:27.243 | 70.00th=[16057], 80.00th=[16581], 90.00th=[19792], 95.00th=[21627],
00:18:27.243 | 99.00th=[24773], 99.50th=[27132], 99.90th=[32113], 99.95th=[34341],
00:18:27.243 | 99.99th=[34866]
00:18:27.243 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(256MiB/5698msec); 0 zone resets
00:18:27.243 slat (usec): min=4, max=2461, avg= 8.16, stdev=12.87
00:18:27.243 clat (usec): min=461, max=63475, avg=11064.00, stdev=15318.97
00:18:27.243 lat (usec): min=469, max=63482, avg=11072.16, stdev=15319.07
00:18:27.243 clat percentiles (usec):
00:18:27.243 | 1.00th=[ 775], 5.00th=[ 1139], 10.00th=[ 1385], 20.00th=[ 1680],
00:18:27.243 | 30.00th=[ 1975], 40.00th=[ 2900], 50.00th=[ 5473], 60.00th=[ 6521],
00:18:27.243 | 70.00th=[ 7898], 80.00th=[11469], 90.00th=[43254], 95.00th=[49021],
00:18:27.243 | 99.00th=[55313], 99.50th=[56886], 99.90th=[60031], 99.95th=[61080],
00:18:27.243 | 99.99th=[63177]
00:18:27.243 bw ( KiB/s): min=14008, max=83240, per=94.97%, avg=43690.67, stdev=18616.99, samples=12
00:18:27.243 iops : min= 3502, max=20810, avg=10922.67, stdev=4654.25, samples=12
00:18:27.243 lat (usec) : 500=0.01%, 750=0.42%, 1000=1.07%
00:18:27.243 lat (msec) : 2=13.84%, 4=5.60%, 10=17.97%, 20=48.55%, 50=10.53%
00:18:27.243 lat (msec) : 100=2.01%
00:18:27.243 cpu : usr=98.82%, sys=0.26%, ctx=30, majf=0, minf=5575
00:18:27.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:18:27.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:27.243 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:27.243 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:27.243 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:27.243
00:18:27.243 Run status group 0 (all jobs):
00:18:27.243 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=255MiB (267MB), run=8289-8289msec
00:18:27.243 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=256MiB (268MB), run=5698-5698msec
00:18:27.814 -----------------------------------------------------
00:18:27.814 Suppressions used:
00:18:27.814 count bytes template
00:18:27.814 1 5 /usr/src/fio/parse.c
00:18:27.814 2 192 /usr/src/fio/iolog.c
00:18:27.814 1 8 libtcmalloc_minimal.so
00:18:27.814 1 904 libcrypto.so
00:18:27.814 -----------------------------------------------------
00:18:27.814
00:18:27.814 18:13:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:28.076 Remove shared memory files
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70930 /dev/shm/spdk_tgt_trace.pid86771
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:18:28.076 ************************************
00:18:28.076 END TEST ftl_fio_basic
00:18:28.076 ************************************
00:18:28.076
00:18:28.076 real 1m5.374s
00:18:28.076 user 2m20.744s
00:18:28.076 sys 0m2.897s
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:28.076 18:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:28.076 18:13:02 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:18:28.076 18:13:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:18:28.076 18:13:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:28.076 18:13:02 ftl -- common/autotest_common.sh@10 -- # set +x
00:18:28.076 ************************************
00:18:28.076 START TEST ftl_bdevperf
00:18:28.076 ************************************
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:18:28.076 * Looking for test storage...
00:18:28.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:28.076 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:28.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:28.076 --rc genhtml_branch_coverage=1
00:18:28.077 --rc genhtml_function_coverage=1
00:18:28.077 --rc genhtml_legend=1
00:18:28.077 --rc geninfo_all_blocks=1
00:18:28.077 --rc geninfo_unexecuted_blocks=1
00:18:28.077
00:18:28.077 '
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:28.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:28.077 --rc genhtml_branch_coverage=1
00:18:28.077 --rc genhtml_function_coverage=1
00:18:28.077 --rc genhtml_legend=1
00:18:28.077 --rc geninfo_all_blocks=1
00:18:28.077 --rc geninfo_unexecuted_blocks=1
00:18:28.077
00:18:28.077 '
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:18:28.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:28.077 --rc genhtml_branch_coverage=1
00:18:28.077 --rc genhtml_function_coverage=1
00:18:28.077 --rc genhtml_legend=1
00:18:28.077 --rc geninfo_all_blocks=1
00:18:28.077 --rc geninfo_unexecuted_blocks=1
00:18:28.077
00:18:28.077 '
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:18:28.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:28.077 --rc genhtml_branch_coverage=1
00:18:28.077 --rc genhtml_function_coverage=1
00:18:28.077 --rc genhtml_legend=1
00:18:28.077 --rc geninfo_all_blocks=1
00:18:28.077 --rc geninfo_unexecuted_blocks=1
00:18:28.077
00:18:28.077 '
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid=
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:18:28.077 18:13:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:18:28.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append=
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=88655
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 88655
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88655 ']'
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:18:28.338 18:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:28.338 [2024-12-13 18:13:02.525725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:18:28.338 [2024-12-13 18:13:02.526129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88655 ]
00:18:28.338 [2024-12-13 18:13:02.673168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:28.338 [2024-12-13 18:13:02.701672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:29.282 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:29.282 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:18:29.282 18:13:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:18:29.282 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0
00:18:29.282 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:18:29.282 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424
00:18:29.282 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev
00:18:29.283 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:18:29.542 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:18:29.542 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size
00:18:29.542 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:18:29.542 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:18:29.542 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:18:29.542 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:18:29.542 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:18:29.542 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:18:29.542 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:18:29.542 {
00:18:29.542 "name": "nvme0n1",
00:18:29.542 "aliases": [
00:18:29.542 "62315adf-ee66-4825-ba39-5f48fbf5a2b3"
00:18:29.542 ],
00:18:29.542 "product_name": "NVMe disk",
00:18:29.542 "block_size": 4096,
00:18:29.542 "num_blocks": 1310720,
00:18:29.542 "uuid": "62315adf-ee66-4825-ba39-5f48fbf5a2b3",
00:18:29.542 "numa_id": -1,
00:18:29.542 "assigned_rate_limits": {
00:18:29.542 "rw_ios_per_sec": 0,
00:18:29.542 "rw_mbytes_per_sec": 0,
00:18:29.542 "r_mbytes_per_sec": 0,
00:18:29.542 "w_mbytes_per_sec": 0
00:18:29.542 },
00:18:29.542 "claimed": true,
00:18:29.542 "claim_type": "read_many_write_one",
00:18:29.542 "zoned": false,
00:18:29.542 "supported_io_types": {
00:18:29.542 "read": true,
00:18:29.542 "write": true,
00:18:29.542 "unmap": true,
00:18:29.542 "flush": true,
00:18:29.542 "reset": true,
00:18:29.542 "nvme_admin": true,
00:18:29.542 "nvme_io": true,
00:18:29.542 "nvme_io_md": false,
00:18:29.542 "write_zeroes": true,
00:18:29.542 "zcopy": false,
00:18:29.542 "get_zone_info": false,
00:18:29.542 "zone_management": false,
00:18:29.542 "zone_append": false,
00:18:29.542 "compare": true,
00:18:29.542 "compare_and_write": false,
00:18:29.542 "abort": true,
00:18:29.542 "seek_hole": false,
00:18:29.542 "seek_data": false,
00:18:29.542 "copy": true,
00:18:29.542 "nvme_iov_md": false
00:18:29.542 },
00:18:29.542 "driver_specific": {
00:18:29.542 "nvme": [
00:18:29.542 {
00:18:29.542 "pci_address": "0000:00:11.0",
00:18:29.542 "trid": {
00:18:29.542 "trtype": "PCIe",
00:18:29.542 "traddr": "0000:00:11.0"
00:18:29.542 },
00:18:29.542 "ctrlr_data": {
00:18:29.542 "cntlid": 0,
00:18:29.542 "vendor_id": "0x1b36",
00:18:29.542 "model_number": "QEMU NVMe Ctrl",
00:18:29.542 "serial_number": "12341",
00:18:29.542 "firmware_revision": "8.0.0",
00:18:29.542 "subnqn": "nqn.2019-08.org.qemu:12341",
00:18:29.542 "oacs": {
00:18:29.542 "security": 0,
00:18:29.542 "format": 1,
00:18:29.542 "firmware": 0,
00:18:29.542 "ns_manage": 1
00:18:29.542 },
00:18:29.542 "multi_ctrlr": false,
00:18:29.542 "ana_reporting": false
00:18:29.542 },
00:18:29.542 "vs": {
00:18:29.542 "nvme_version": "1.4"
00:18:29.542 },
00:18:29.542 "ns_data": {
00:18:29.542 "id": 1,
00:18:29.542 "can_share": false
00:18:29.542 }
00:18:29.542 }
00:18:29.542 ],
00:18:29.542 "mp_policy": "active_passive"
00:18:29.542 }
00:18:29.542 }
00:18:29.542 ]'
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:18:29.802 18:13:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:18:30.063 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5b66962b-6743-45e4-b44b-98ca46b63241
00:18:30.063 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores
00:18:30.063 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b66962b-6743-45e4-b44b-98ca46b63241
00:18:30.324 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:18:30.324 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=9d97eed0-1c4c-4999-9ee3-12fa7dfc149d
00:18:30.324 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9d97eed0-1c4c-4999-9ee3-12fa7dfc149d
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size=
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:18:30.586 18:13:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:18:30.848 {
00:18:30.848 "name": "d99b0ec2-bbb4-420d-ada2-c4394dfe0751",
00:18:30.848 "aliases": [
00:18:30.848 "lvs/nvme0n1p0"
00:18:30.848 ],
00:18:30.848 "product_name": "Logical Volume",
00:18:30.848 "block_size": 4096,
00:18:30.848 "num_blocks": 26476544,
00:18:30.848 "uuid": "d99b0ec2-bbb4-420d-ada2-c4394dfe0751",
00:18:30.848 "assigned_rate_limits": {
00:18:30.848 "rw_ios_per_sec": 0,
00:18:30.848 "rw_mbytes_per_sec": 0,
00:18:30.848 "r_mbytes_per_sec": 0,
00:18:30.848 "w_mbytes_per_sec": 0
00:18:30.848 },
00:18:30.848 "claimed": false,
00:18:30.848 "zoned": false,
00:18:30.848 "supported_io_types": {
00:18:30.848 "read": true,
00:18:30.848 "write": true,
00:18:30.848 "unmap": true,
00:18:30.848 "flush": false,
00:18:30.848 "reset": true,
00:18:30.848 "nvme_admin": false,
00:18:30.848 "nvme_io": false,
00:18:30.848 "nvme_io_md": false,
00:18:30.848 "write_zeroes": true,
00:18:30.848 "zcopy": false,
00:18:30.848 "get_zone_info": false,
00:18:30.848 "zone_management": false,
00:18:30.848 "zone_append": false,
00:18:30.848 "compare": false,
00:18:30.848 "compare_and_write": false,
00:18:30.848 "abort": false,
00:18:30.848 "seek_hole": true,
00:18:30.848 "seek_data": true,
00:18:30.848 "copy": false,
00:18:30.848 "nvme_iov_md": false
00:18:30.848 },
00:18:30.848 "driver_specific": {
00:18:30.848 "lvol": {
00:18:30.848 "lvol_store_uuid": "9d97eed0-1c4c-4999-9ee3-12fa7dfc149d",
00:18:30.848 "base_bdev": "nvme0n1",
00:18:30.848 "thin_provision": true,
00:18:30.848 "num_allocated_clusters": 0,
00:18:30.848 "snapshot": false,
00:18:30.848 "clone": false,
00:18:30.848 "esnap_clone": false
00:18:30.848 }
00:18:30.848 }
00:18:30.848 }
00:18:30.848 ]'
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev
00:18:30.848 18:13:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:18:31.108 18:13:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:18:31.108 18:13:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]]
00:18:31.108 18:13:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:31.108 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:31.108 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:18:31.108 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:18:31.108 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:18:31.108 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:31.369 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:18:31.369 {
00:18:31.369 "name": "d99b0ec2-bbb4-420d-ada2-c4394dfe0751",
00:18:31.369 "aliases": [
00:18:31.369 "lvs/nvme0n1p0"
00:18:31.369 ],
00:18:31.369 "product_name": "Logical Volume",
00:18:31.369 "block_size": 4096,
00:18:31.369 "num_blocks": 26476544,
00:18:31.369 "uuid": "d99b0ec2-bbb4-420d-ada2-c4394dfe0751",
00:18:31.369 "assigned_rate_limits": {
00:18:31.369 "rw_ios_per_sec": 0,
00:18:31.369 "rw_mbytes_per_sec": 0,
00:18:31.369 "r_mbytes_per_sec": 0,
00:18:31.369 "w_mbytes_per_sec": 0
00:18:31.369 },
00:18:31.369 "claimed": false,
00:18:31.369 "zoned": false,
00:18:31.369 "supported_io_types": {
00:18:31.369 "read": true,
00:18:31.369 "write": true,
00:18:31.369 "unmap": true,
00:18:31.369 "flush": false,
00:18:31.369 "reset": true,
00:18:31.369 "nvme_admin": false,
00:18:31.369 "nvme_io": false,
00:18:31.369 "nvme_io_md": false,
00:18:31.369 "write_zeroes": true,
00:18:31.369 "zcopy": false,
00:18:31.369 "get_zone_info": false,
00:18:31.369 "zone_management": false,
00:18:31.369 "zone_append": false,
00:18:31.369 "compare": false,
00:18:31.369 "compare_and_write": false,
00:18:31.369 "abort": false,
00:18:31.369 "seek_hole": true,
00:18:31.369 "seek_data": true,
00:18:31.369 "copy": false,
00:18:31.369 "nvme_iov_md": false
00:18:31.369 },
00:18:31.369 "driver_specific": {
00:18:31.369 "lvol": {
00:18:31.369 "lvol_store_uuid": "9d97eed0-1c4c-4999-9ee3-12fa7dfc149d",
00:18:31.369 "base_bdev": "nvme0n1",
00:18:31.369 "thin_provision": true,
00:18:31.369 "num_allocated_clusters": 0,
00:18:31.369 "snapshot": false,
00:18:31.369 "clone": false,
00:18:31.369 "esnap_clone": false
00:18:31.369 }
00:18:31.369 }
00:18:31.369 }
00:18:31.369 ]'
00:18:31.369 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:18:31.369 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:18:31.369 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:18:31.369 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:18:31.369 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:18:31.369 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:18:31.369 18:13:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171
00:18:31.369 18:13:05 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:18:31.628 18:13:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0
00:18:31.628 18:13:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:31.628 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:31.628 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:18:31.628 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:18:31.628 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:18:31.628 18:13:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d99b0ec2-bbb4-420d-ada2-c4394dfe0751
00:18:31.888 18:13:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:18:31.888 {
00:18:31.888 "name": "d99b0ec2-bbb4-420d-ada2-c4394dfe0751",
00:18:31.888 "aliases": [
00:18:31.888 "lvs/nvme0n1p0"
00:18:31.888 ],
00:18:31.888 "product_name": "Logical Volume",
00:18:31.888 "block_size": 4096,
00:18:31.888 "num_blocks": 26476544,
00:18:31.888 "uuid": "d99b0ec2-bbb4-420d-ada2-c4394dfe0751",
00:18:31.888 "assigned_rate_limits": {
00:18:31.888 "rw_ios_per_sec": 0,
00:18:31.888 "rw_mbytes_per_sec": 0,
00:18:31.888 "r_mbytes_per_sec": 0,
00:18:31.888 "w_mbytes_per_sec": 0
00:18:31.888 },
00:18:31.888 "claimed": false,
00:18:31.888 "zoned": false,
00:18:31.888 "supported_io_types": {
00:18:31.888 "read": true,
00:18:31.888 "write": true,
00:18:31.888 "unmap": true,
00:18:31.888 "flush": false,
00:18:31.888 "reset": true,
00:18:31.888 "nvme_admin": false,
00:18:31.888 "nvme_io": false,
00:18:31.888 "nvme_io_md": false,
00:18:31.888 "write_zeroes": true,
00:18:31.888 "zcopy": false,
00:18:31.888 "get_zone_info": false,
00:18:31.888 "zone_management": false,
00:18:31.888 "zone_append": false,
00:18:31.888 "compare": false,
00:18:31.888 "compare_and_write": false,
00:18:31.888 "abort": false,
00:18:31.888 "seek_hole": true,
00:18:31.888 "seek_data": true,
00:18:31.888 "copy": false,
00:18:31.888 "nvme_iov_md": false
00:18:31.888 },
00:18:31.888 "driver_specific": {
00:18:31.888 "lvol": {
00:18:31.888 "lvol_store_uuid": "9d97eed0-1c4c-4999-9ee3-12fa7dfc149d",
00:18:31.888 "base_bdev": "nvme0n1",
00:18:31.888 "thin_provision": true,
00:18:31.888 "num_allocated_clusters": 0,
00:18:31.888 "snapshot": false,
00:18:31.888 "clone": false,
00:18:31.888 "esnap_clone": false
00:18:31.888 }
00:18:31.888 }
00:18:31.888 }
00:18:31.888 ]'
00:18:31.888 18:13:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:18:31.888 18:13:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:18:31.888 18:13:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:18:31.888 18:13:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:18:31.888 18:13:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:18:31.888 18:13:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:18:31.888 18:13:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20
00:18:31.888 18:13:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d99b0ec2-bbb4-420d-ada2-c4394dfe0751 -c nvc0n1p0 --l2p_dram_limit 20
00:18:32.149 [2024-12-13 18:13:06.423480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.149 [2024-12-13 18:13:06.423836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:32.149 [2024-12-13 18:13:06.423872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:32.149 [2024-12-13 18:13:06.423883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.149 [2024-12-13 18:13:06.423977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.149 [2024-12-13 18:13:06.423989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:32.149 [2024-12-13 18:13:06.424005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms
00:18:32.149 [2024-12-13 18:13:06.424015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.149 [2024-12-13 18:13:06.424043] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:32.149 [2024-12-13 18:13:06.424409] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:32.149 [2024-12-13 18:13:06.424440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.150 [2024-12-13 18:13:06.424455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:32.150 [2024-12-13 18:13:06.424470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms
00:18:32.150 [2024-12-13 18:13:06.424480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.150 [2024-12-13 18:13:06.424562] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a49f29bc-9da3-4d6c-bf0d-8ed3574f1426
00:18:32.150 [2024-12-13 18:13:06.427088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.150 [2024-12-13 18:13:06.427326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:18:32.150 [2024-12-13 18:13:06.427352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:18:32.150 [2024-12-13 18:13:06.427369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.150 [2024-12-13 18:13:06.440498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.150 [2024-12-13 18:13:06.440558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:32.150 [2024-12-13 18:13:06.440571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.045 ms
00:18:32.150 [2024-12-13 18:13:06.440586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.150 [2024-12-13 18:13:06.440681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.150 [2024-12-13 18:13:06.440694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:32.150 [2024-12-13 18:13:06.440710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms
00:18:32.150 [2024-12-13 18:13:06.440723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.150 [2024-12-13 18:13:06.440784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.150 [2024-12-13 18:13:06.440804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:18:32.150 [2024-12-13 18:13:06.440813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:18:32.150 [2024-12-13 18:13:06.440825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.150 [2024-12-13 18:13:06.440854] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:32.150 [2024-12-13 18:13:06.443823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.150 [2024-12-13 18:13:06.443870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:32.150 [2024-12-13 18:13:06.443889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms
00:18:32.150 [2024-12-13 18:13:06.443897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.150 [2024-12-13 18:13:06.443950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.150 [2024-12-13 18:13:06.443959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:18:32.150 [2024-12-13 18:13:06.443975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:18:32.150 [2024-12-13 18:13:06.443983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.150 [2024-12-13 18:13:06.444031] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:18:32.150 [2024-12-13 18:13:06.444202] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:18:32.150 [2024-12-13 18:13:06.444220] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:18:32.150 [2024-12-13 18:13:06.444274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:18:32.150 [2024-12-13 18:13:06.444293] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:18:32.150 [2024-12-13 18:13:06.444303] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:18:32.150 [2024-12-13 18:13:06.444315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:18:32.150 [2024-12-13 18:13:06.444324] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:18:32.150 [2024-12-13 18:13:06.444335] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:18:32.150 [2024-12-13 18:13:06.444351] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:18:32.150 [2024-12-13 18:13:06.444363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.150 [2024-12-13 18:13:06.444373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:18:32.150 [2024-12-13 18:13:06.444384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms
00:18:32.150 [2024-12-13 18:13:06.444391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.150 [2024-12-13 18:13:06.444488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.150 [2024-12-13 18:13:06.444503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:18:32.150 [2024-12-13 18:13:06.444515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:18:32.150 [2024-12-13 18:13:06.444530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.150 [2024-12-13 18:13:06.444631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:18:32.150 [2024-12-13 18:13:06.444644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:18:32.150 [2024-12-13 18:13:06.444657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:32.150 [2024-12-13 18:13:06.444671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:18:32.150 [2024-12-13 18:13:06.444697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:18:32.150 [2024-12-13 18:13:06.444716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:18:32.150 [2024-12-13 18:13:06.444728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:32.150 [2024-12-13 18:13:06.444746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:18:32.150 [2024-12-13 18:13:06.444757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:18:32.150 [2024-12-13 18:13:06.444772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:32.150 [2024-12-13 18:13:06.444781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:18:32.150 [2024-12-13 18:13:06.444792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:18:32.150 [2024-12-13 18:13:06.444800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:18:32.150 [2024-12-13 18:13:06.444823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:18:32.150 [2024-12-13 18:13:06.444842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:18:32.150 [2024-12-13 18:13:06.444859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:32.150 [2024-12-13 18:13:06.444877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:18:32.150 [2024-12-13 18:13:06.444883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:32.150 [2024-12-13 18:13:06.444899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:18:32.150 [2024-12-13 18:13:06.444908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:32.150 [2024-12-13 18:13:06.444928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:18:32.150 [2024-12-13 18:13:06.444935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:32.150 [2024-12-13 18:13:06.444950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:18:32.150 [2024-12-13 18:13:06.444961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:18:32.150 [2024-12-13 18:13:06.444968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:32.150 [2024-12-13 18:13:06.444979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:18:32.150 [2024-12-13 18:13:06.444986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:18:32.150 [2024-12-13 18:13:06.444995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:32.150 [2024-12-13 18:13:06.445003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:18:32.150 [2024-12-13 18:13:06.445012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0]
offset: 113.62 MiB 00:18:32.150 [2024-12-13 18:13:06.445020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.150 [2024-12-13 18:13:06.445030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:32.150 [2024-12-13 18:13:06.445036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:32.150 [2024-12-13 18:13:06.445045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.150 [2024-12-13 18:13:06.445051] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:32.150 [2024-12-13 18:13:06.445064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:32.150 [2024-12-13 18:13:06.445073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:32.150 [2024-12-13 18:13:06.445085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.150 [2024-12-13 18:13:06.445093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:32.150 [2024-12-13 18:13:06.445102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:32.150 [2024-12-13 18:13:06.445109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:32.150 [2024-12-13 18:13:06.445130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:32.150 [2024-12-13 18:13:06.445139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:32.150 [2024-12-13 18:13:06.445150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:32.150 [2024-12-13 18:13:06.445163] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:32.150 [2024-12-13 18:13:06.445176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:32.150 [2024-12-13 18:13:06.445187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:32.150 [2024-12-13 18:13:06.445200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:32.150 [2024-12-13 18:13:06.445207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:32.150 [2024-12-13 18:13:06.445218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:32.151 [2024-12-13 18:13:06.445226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:32.151 [2024-12-13 18:13:06.445239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:32.151 [2024-12-13 18:13:06.445266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:32.151 [2024-12-13 18:13:06.445284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:32.151 [2024-12-13 18:13:06.445292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:32.151 [2024-12-13 18:13:06.445304] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:32.151 [2024-12-13 18:13:06.445313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:32.151 [2024-12-13 18:13:06.445325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:32.151 [2024-12-13 18:13:06.445332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:32.151 [2024-12-13 18:13:06.445343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:32.151 [2024-12-13 18:13:06.445352] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:32.151 [2024-12-13 18:13:06.445371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:32.151 [2024-12-13 18:13:06.445381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:32.151 [2024-12-13 18:13:06.445393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:32.151 [2024-12-13 18:13:06.445402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:32.151 [2024-12-13 18:13:06.445412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:32.151 [2024-12-13 18:13:06.445423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.151 [2024-12-13 18:13:06.445437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:32.151 [2024-12-13 18:13:06.445445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:18:32.151 [2024-12-13 18:13:06.445455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.151 [2024-12-13 18:13:06.445518] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
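A note on the layout dump above: each region is reported twice, once by dump_region in MiB and once by the superblock metadata dump as hex block offsets (blk_offs) and block counts (blk_sz). The two views agree if each FTL logical block is 4 KiB; that block size is an assumption here, but every figure in this dump is consistent with it. A minimal cross-check sketch (Python; region values copied from the dump above):

    # Cross-check of the layout dump above. Assumption: 4 KiB FTL logical blocks.
    FTL_BLOCK_SIZE = 4096

    def blocks_to_mib(blk_sz_hex: str) -> float:
        """hex block count (SB metadata dump) -> MiB (dump_region lines)."""
        return int(blk_sz_hex, 16) * FTL_BLOCK_SIZE / (1 << 20)

    assert blocks_to_mib("0x5000") == 80.0  # type:0x2  <-> "Region l2p      blocks: 80.00 MiB"
    assert blocks_to_mib("0x80") == 0.5     # type:0x3  <-> "Region band_md  blocks: 0.50 MiB"
    assert blocks_to_mib("0x800") == 8.0    # type:0xa..0xd <-> "Region p2l0..p2l3: 8.00 MiB"
                                            # (2048 P2L checkpoint pages x 4 KiB each)

    # The L2P table itself: 20971520 entries x 4-byte addresses = 80 MiB,
    # exactly the l2p region reserved on the NV cache device.
    assert 20971520 * 4 / (1 << 20) == 80.0

The scrub announced at the end of the dump dominates startup: the trace below reports 3686.191 ms for "Scrub NV cache" out of the 3889.731 ms total logged for the whole "FTL startup" process, roughly 95% of it.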
00:18:32.151 [2024-12-13 18:13:06.445533] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:36.438 [2024-12-13 18:13:10.131723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.131838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:36.438 [2024-12-13 18:13:10.131871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3686.191 ms 00:18:36.438 [2024-12-13 18:13:10.131893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.143303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.143351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:36.438 [2024-12-13 18:13:10.143366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.280 ms 00:18:36.438 [2024-12-13 18:13:10.143380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.143487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.143500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:36.438 [2024-12-13 18:13:10.143513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:36.438 [2024-12-13 18:13:10.143524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.162144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.162199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:36.438 [2024-12-13 18:13:10.162215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.568 ms 00:18:36.438 [2024-12-13 18:13:10.162228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.162294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.162314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:36.438 [2024-12-13 18:13:10.162325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:36.438 [2024-12-13 18:13:10.162337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.162847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.162889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:36.438 [2024-12-13 18:13:10.162903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:18:36.438 [2024-12-13 18:13:10.162919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.163065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.163080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:36.438 [2024-12-13 18:13:10.163096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:18:36.438 [2024-12-13 18:13:10.163109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.170131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.170169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:36.438 [2024-12-13 
18:13:10.170179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.000 ms 00:18:36.438 [2024-12-13 18:13:10.170189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.179348] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:36.438 [2024-12-13 18:13:10.185456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.185657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:36.438 [2024-12-13 18:13:10.185678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.210 ms 00:18:36.438 [2024-12-13 18:13:10.185686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.252610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.252806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:36.438 [2024-12-13 18:13:10.252832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.893 ms 00:18:36.438 [2024-12-13 18:13:10.252844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.253061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.253074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:36.438 [2024-12-13 18:13:10.253090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:18:36.438 [2024-12-13 18:13:10.253098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.438 [2024-12-13 18:13:10.257431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.438 [2024-12-13 18:13:10.257466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:36.438 [2024-12-13 18:13:10.257479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.311 ms 00:18:36.439 [2024-12-13 18:13:10.257487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.439 [2024-12-13 18:13:10.261064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.439 [2024-12-13 18:13:10.261098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:36.439 [2024-12-13 18:13:10.261110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.535 ms 00:18:36.439 [2024-12-13 18:13:10.261118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.439 [2024-12-13 18:13:10.261465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.439 [2024-12-13 18:13:10.261478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:36.439 [2024-12-13 18:13:10.261491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:18:36.439 [2024-12-13 18:13:10.261509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.439 [2024-12-13 18:13:10.296956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.439 [2024-12-13 18:13:10.297002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:36.439 [2024-12-13 18:13:10.297017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.424 ms 00:18:36.439 [2024-12-13 18:13:10.297026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.439 [2024-12-13 18:13:10.302955] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.439 [2024-12-13 18:13:10.302993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:36.439 [2024-12-13 18:13:10.303006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.864 ms 00:18:36.439 [2024-12-13 18:13:10.303014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.439 [2024-12-13 18:13:10.307445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.439 [2024-12-13 18:13:10.307481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:36.439 [2024-12-13 18:13:10.307493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.391 ms 00:18:36.439 [2024-12-13 18:13:10.307500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.439 [2024-12-13 18:13:10.312394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.439 [2024-12-13 18:13:10.312431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:36.439 [2024-12-13 18:13:10.312446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.855 ms 00:18:36.439 [2024-12-13 18:13:10.312454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.439 [2024-12-13 18:13:10.312497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.439 [2024-12-13 18:13:10.312510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:36.439 [2024-12-13 18:13:10.312522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:36.439 [2024-12-13 18:13:10.312530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.439 [2024-12-13 18:13:10.312604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.439 [2024-12-13 18:13:10.312614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:36.439 [2024-12-13 18:13:10.312624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:36.439 [2024-12-13 18:13:10.312632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.439 [2024-12-13 18:13:10.313717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3889.731 ms, result 0 00:18:36.439 { 00:18:36.439 "name": "ftl0", 00:18:36.439 "uuid": "a49f29bc-9da3-4d6c-bf0d-8ed3574f1426" 00:18:36.439 } 00:18:36.439 18:13:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:36.439 18:13:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:36.439 18:13:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:36.439 18:13:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:36.439 [2024-12-13 18:13:10.656439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:36.439 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:36.439 Zero copy mechanism will not be used. 00:18:36.439 Running I/O for 4 seconds... 
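The zero-copy notice above is a simple threshold check: this first job uses 69632-byte (68 KiB) I/Os, which exceed bdevperf's 65536-byte zero-copy limit, so it falls back to copied buffers. In the per-second progress lines and summary table that follow, the MiB/s column is derived from IOPS and the I/O size; a quick sketch checking that against the JSON results reported below:

    # Throughput bookkeeping for the run below (-q 1 -w randwrite -o 69632).
    io_size = 69632                 # bytes per I/O; 68 KiB > 65536, hence no zero copy
    iops = 777.2464233543807        # "iops" from the JSON results below
    mibps = iops * io_size / (1 << 20)
    print(round(mibps, 2))          # 51.61 -> matches the reported "mibps" column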
00:18:38.326 636.00 IOPS, 42.23 MiB/s [2024-12-13T18:13:14.078Z] 642.00 IOPS, 42.63 MiB/s [2024-12-13T18:13:15.014Z] 745.00 IOPS, 49.47 MiB/s [2024-12-13T18:13:15.014Z] 777.25 IOPS, 51.61 MiB/s 00:18:40.637 Latency(us) 00:18:40.637 [2024-12-13T18:13:15.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.637 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:40.637 ftl0 : 4.00 777.25 51.61 0.00 0.00 1362.78 190.62 3528.86 00:18:40.637 [2024-12-13T18:13:15.014Z] =================================================================================================================== 00:18:40.637 [2024-12-13T18:13:15.014Z] Total : 777.25 51.61 0.00 0.00 1362.78 190.62 3528.86 00:18:40.637 [2024-12-13 18:13:14.663499] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:40.637 { 00:18:40.637 "results": [ 00:18:40.637 { 00:18:40.637 "job": "ftl0", 00:18:40.637 "core_mask": "0x1", 00:18:40.637 "workload": "randwrite", 00:18:40.637 "status": "finished", 00:18:40.637 "queue_depth": 1, 00:18:40.637 "io_size": 69632, 00:18:40.637 "runtime": 4.001305, 00:18:40.637 "iops": 777.2464233543807, 00:18:40.637 "mibps": 51.614020300876845, 00:18:40.637 "io_failed": 0, 00:18:40.637 "io_timeout": 0, 00:18:40.637 "avg_latency_us": 1362.7783566658422, 00:18:40.637 "min_latency_us": 190.62153846153845, 00:18:40.637 "max_latency_us": 3528.8615384615387 00:18:40.637 } 00:18:40.637 ], 00:18:40.637 "core_count": 1 00:18:40.637 } 00:18:40.637 18:13:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:40.637 [2024-12-13 18:13:14.764097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:40.637 Running I/O for 4 seconds... 
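The summary just printed, and the queue-depth-128 run whose output follows, can be sanity-checked with Little's law: a closed loop that keeps a fixed number of I/Os in flight should see average latency of roughly queue_depth / IOPS. A sketch using the reported JSON values (the second run's figures come from the results below):

    # Little's law check (L = lambda * W): avg latency ~= queue_depth / IOPS.
    for qd, iops, reported_us in [
        (1,   777.2464233543807, 1362.78),   # depth-1 run above, 68 KiB I/Os
        (128, 6258.467984356309, 20369.55),  # depth-128 run below, 4 KiB I/Os
    ]:
        predicted_us = qd / iops * 1e6
        print(predicted_us, reported_us)
    # ~1286.6 us vs 1362.78 us, and ~20452 us vs 20369.55 us:
    # agreement within ~6%; the residue is submission/completion overhead.

As expected for a fresh FTL still absorbing writes into the NV cache, deepening the queue from 1 to 128 multiplies per-I/O latency (~1.3 ms to ~20 ms) while raising aggregate IOPS.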
00:18:42.520 7963.00 IOPS, 31.11 MiB/s [2024-12-13T18:13:17.841Z] 7083.00 IOPS, 27.67 MiB/s [2024-12-13T18:13:18.784Z] 6656.67 IOPS, 26.00 MiB/s [2024-12-13T18:13:19.046Z] 6279.00 IOPS, 24.53 MiB/s 00:18:44.669 Latency(us) 00:18:44.669 [2024-12-13T18:13:19.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.669 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.669 ftl0 : 4.03 6258.47 24.45 0.00 0.00 20369.55 305.62 44161.18 00:18:44.669 [2024-12-13T18:13:19.046Z] =================================================================================================================== 00:18:44.669 [2024-12-13T18:13:19.046Z] Total : 6258.47 24.45 0.00 0.00 20369.55 0.00 44161.18 00:18:44.669 { 00:18:44.669 "results": [ 00:18:44.669 { 00:18:44.669 "job": "ftl0", 00:18:44.669 "core_mask": "0x1", 00:18:44.669 "workload": "randwrite", 00:18:44.669 "status": "finished", 00:18:44.669 "queue_depth": 128, 00:18:44.669 "io_size": 4096, 00:18:44.669 "runtime": 4.033575, 00:18:44.669 "iops": 6258.467984356309, 00:18:44.669 "mibps": 24.447140563891832, 00:18:44.669 "io_failed": 0, 00:18:44.669 "io_timeout": 0, 00:18:44.669 "avg_latency_us": 20369.545749424084, 00:18:44.669 "min_latency_us": 305.62461538461537, 00:18:44.669 "max_latency_us": 44161.18153846154 00:18:44.669 } 00:18:44.669 ], 00:18:44.669 "core_count": 1 00:18:44.669 } 00:18:44.669 [2024-12-13 18:13:18.803828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:44.669 18:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:44.669 [2024-12-13 18:13:18.908143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:44.669 Running I/O for 4 seconds... 
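The verify job launched above reads back the entire user-visible device: its LBA range in the results below is start 0x0, length 0x1400000 blocks, and 0x1400000 = 20971520, exactly the "L2P entries" count from the startup layout dump. At the 4 KiB block size assumed earlier that is 80 GiB exposed, versus the 102400 MiB data region on the base device; the gap is, by all appearances, the FTL's over-provisioned spare area (a sketch):

    # The verify job's range (JSON "verify_range" below) spans the full device.
    length_blocks = 0x1400000
    print(length_blocks)                         # 20971520 == "L2P entries" at startup
    print(length_blocks * 4096 / (1 << 30))      # 80.0 GiB of user-visible capacity
    # Base-device data region (type:0x9) from the layout dump: 0x1900000 blocks
    print(0x1900000 * 4096 / (1 << 20))          # 102400.0 MiB physical -> ~20% spare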
00:18:46.555 4515.00 IOPS, 17.64 MiB/s [2024-12-13T18:13:22.319Z] 4446.50 IOPS, 17.37 MiB/s [2024-12-13T18:13:23.262Z] 4424.33 IOPS, 17.28 MiB/s [2024-12-13T18:13:23.262Z] 4384.75 IOPS, 17.13 MiB/s 00:18:48.885 Latency(us) 00:18:48.885 [2024-12-13T18:13:23.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.885 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:48.885 Verification LBA range: start 0x0 length 0x1400000 00:18:48.885 ftl0 : 4.02 4399.61 17.19 0.00 0.00 29002.96 400.15 124215.93 00:18:48.885 [2024-12-13T18:13:23.262Z] =================================================================================================================== 00:18:48.885 [2024-12-13T18:13:23.262Z] Total : 4399.61 17.19 0.00 0.00 29002.96 0.00 124215.93 00:18:48.885 [2024-12-13 18:13:22.931508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:48.885 { 00:18:48.885 "results": [ 00:18:48.885 { 00:18:48.885 "job": "ftl0", 00:18:48.885 "core_mask": "0x1", 00:18:48.885 "workload": "verify", 00:18:48.885 "status": "finished", 00:18:48.885 "verify_range": { 00:18:48.885 "start": 0, 00:18:48.885 "length": 20971520 00:18:48.885 }, 00:18:48.885 "queue_depth": 128, 00:18:48.885 "io_size": 4096, 00:18:48.885 "runtime": 4.01558, 00:18:48.885 "iops": 4399.613505396481, 00:18:48.885 "mibps": 17.185990255455003, 00:18:48.885 "io_failed": 0, 00:18:48.885 "io_timeout": 0, 00:18:48.885 "avg_latency_us": 29002.961003522432, 00:18:48.885 "min_latency_us": 400.1476923076923, 00:18:48.885 "max_latency_us": 124215.92615384616 00:18:48.885 } 00:18:48.885 ], 00:18:48.885 "core_count": 1 00:18:48.885 } 00:18:48.885 18:13:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:48.885 [2024-12-13 18:13:23.139898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.885 [2024-12-13 18:13:23.140188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:48.885 [2024-12-13 18:13:23.140489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:48.885 [2024-12-13 18:13:23.140539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.885 [2024-12-13 18:13:23.140603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:48.885 [2024-12-13 18:13:23.141407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.885 [2024-12-13 18:13:23.141607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:48.885 [2024-12-13 18:13:23.141817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:18:48.885 [2024-12-13 18:13:23.141870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.885 [2024-12-13 18:13:23.145292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.885 [2024-12-13 18:13:23.145479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:48.885 [2024-12-13 18:13:23.145572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:18:48.885 [2024-12-13 18:13:23.145610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.366033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.148 [2024-12-13 18:13:23.366285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:18:49.148 [2024-12-13 18:13:23.366397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 220.382 ms 00:18:49.148 [2024-12-13 18:13:23.366447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.373080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.148 [2024-12-13 18:13:23.373297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:49.148 [2024-12-13 18:13:23.373394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.571 ms 00:18:49.148 [2024-12-13 18:13:23.373431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.376401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.148 [2024-12-13 18:13:23.376606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:49.148 [2024-12-13 18:13:23.376815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:18:49.148 [2024-12-13 18:13:23.376867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.383832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.148 [2024-12-13 18:13:23.384046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:49.148 [2024-12-13 18:13:23.384137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:18:49.148 [2024-12-13 18:13:23.384180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.384379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.148 [2024-12-13 18:13:23.384445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:49.148 [2024-12-13 18:13:23.384473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:18:49.148 [2024-12-13 18:13:23.384621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.388134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.148 [2024-12-13 18:13:23.388379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:49.148 [2024-12-13 18:13:23.388406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:18:49.148 [2024-12-13 18:13:23.388421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.391868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.148 [2024-12-13 18:13:23.392072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:49.148 [2024-12-13 18:13:23.392094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.392 ms 00:18:49.148 [2024-12-13 18:13:23.392104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.394725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.148 [2024-12-13 18:13:23.394789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:49.148 [2024-12-13 18:13:23.394799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:18:49.148 [2024-12-13 18:13:23.394812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.397214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.148 [2024-12-13 18:13:23.397428] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:49.148 [2024-12-13 18:13:23.397537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.310 ms 00:18:49.148 [2024-12-13 18:13:23.397578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.148 [2024-12-13 18:13:23.397632] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:49.148 [2024-12-13 18:13:23.397670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.397770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.397806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.397835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.397914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.397950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.397963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.397972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.397982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.397990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:49.148 [2024-12-13 18:13:23.398106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:49.148 [2024-12-13 18:13:23.398463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398823] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:49.149 [2024-12-13 18:13:23.398883] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:49.149 [2024-12-13 18:13:23.398898] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a49f29bc-9da3-4d6c-bf0d-8ed3574f1426 00:18:49.149 [2024-12-13 18:13:23.398919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:49.149 [2024-12-13 18:13:23.398928] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:49.149 [2024-12-13 18:13:23.398943] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:49.149 [2024-12-13 18:13:23.398952] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:49.149 [2024-12-13 18:13:23.398964] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:49.149 [2024-12-13 18:13:23.398973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:49.149 [2024-12-13 18:13:23.398983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:49.149 [2024-12-13 18:13:23.398989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:49.149 [2024-12-13 18:13:23.398998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:49.149 [2024-12-13 18:13:23.399006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.149 [2024-12-13 18:13:23.399020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:49.149 [2024-12-13 18:13:23.399033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:18:49.149 [2024-12-13 18:13:23.399047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.401718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.149 [2024-12-13 18:13:23.401756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:49.149 [2024-12-13 18:13:23.401767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:18:49.149 [2024-12-13 18:13:23.401778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.401922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.149 [2024-12-13 18:13:23.401937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:49.149 [2024-12-13 18:13:23.401948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:49.149 [2024-12-13 18:13:23.401960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.410211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.410457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:49.149 [2024-12-13 18:13:23.410480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.149 [2024-12-13 18:13:23.410491] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.410559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.410572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:49.149 [2024-12-13 18:13:23.410580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.149 [2024-12-13 18:13:23.410591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.410675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.410689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:49.149 [2024-12-13 18:13:23.410698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.149 [2024-12-13 18:13:23.410709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.410730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.410740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:49.149 [2024-12-13 18:13:23.410750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.149 [2024-12-13 18:13:23.410768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.424835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.424896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:49.149 [2024-12-13 18:13:23.424907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.149 [2024-12-13 18:13:23.424918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.436160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.436230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:49.149 [2024-12-13 18:13:23.436283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.149 [2024-12-13 18:13:23.436294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.436368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.436381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:49.149 [2024-12-13 18:13:23.436396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.149 [2024-12-13 18:13:23.436406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.436447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.436458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:49.149 [2024-12-13 18:13:23.436467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.149 [2024-12-13 18:13:23.436482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.436558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.436571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:49.149 [2024-12-13 18:13:23.436580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:49.149 [2024-12-13 18:13:23.436590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.149 [2024-12-13 18:13:23.436619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.149 [2024-12-13 18:13:23.436631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:49.150 [2024-12-13 18:13:23.436639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.150 [2024-12-13 18:13:23.436648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.150 [2024-12-13 18:13:23.436687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.150 [2024-12-13 18:13:23.436698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:49.150 [2024-12-13 18:13:23.436706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.150 [2024-12-13 18:13:23.436719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.150 [2024-12-13 18:13:23.436763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.150 [2024-12-13 18:13:23.436777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:49.150 [2024-12-13 18:13:23.436787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.150 [2024-12-13 18:13:23.436801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.150 [2024-12-13 18:13:23.436965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 297.008 ms, result 0 00:18:49.150 true 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 88655 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 88655 ']' 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 88655 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88655 00:18:49.150 killing process with pid 88655 00:18:49.150 Received shutdown signal, test time was about 4.000000 seconds 00:18:49.150 00:18:49.150 Latency(us) 00:18:49.150 [2024-12-13T18:13:23.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.150 [2024-12-13T18:13:23.527Z] =================================================================================================================== 00:18:49.150 [2024-12-13T18:13:23.527Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88655' 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 88655 00:18:49.150 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 88655 00:18:49.411 Remove shared memory files 00:18:49.411 18:13:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:49.411 18:13:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:49.411 18:13:23 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:49.411 18:13:23 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:49.411 18:13:23 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:49.411 18:13:23 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:49.411 18:13:23 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:49.411 18:13:23 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:49.411 ************************************ 00:18:49.411 END TEST ftl_bdevperf 00:18:49.411 ************************************ 00:18:49.411 00:18:49.411 real 0m21.422s 00:18:49.411 user 0m24.169s 00:18:49.411 sys 0m0.920s 00:18:49.411 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.411 18:13:23 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:49.411 18:13:23 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:49.411 18:13:23 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:49.411 18:13:23 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.411 18:13:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:49.672 ************************************ 00:18:49.672 START TEST ftl_trim 00:18:49.672 ************************************ 00:18:49.672 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:49.672 * Looking for test storage... 00:18:49.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.672 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:49.672 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:18:49.672 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:49.672 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.672 18:13:23 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:49.672 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.672 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:49.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.672 --rc genhtml_branch_coverage=1 00:18:49.672 --rc genhtml_function_coverage=1 00:18:49.672 --rc genhtml_legend=1 00:18:49.672 --rc geninfo_all_blocks=1 00:18:49.672 --rc geninfo_unexecuted_blocks=1 00:18:49.672 00:18:49.672 ' 00:18:49.672 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:49.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.672 --rc genhtml_branch_coverage=1 00:18:49.672 --rc genhtml_function_coverage=1 00:18:49.672 --rc genhtml_legend=1 00:18:49.672 --rc geninfo_all_blocks=1 00:18:49.672 --rc geninfo_unexecuted_blocks=1 00:18:49.672 00:18:49.673 ' 00:18:49.673 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:49.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.673 --rc genhtml_branch_coverage=1 00:18:49.673 --rc genhtml_function_coverage=1 00:18:49.673 --rc genhtml_legend=1 00:18:49.673 --rc geninfo_all_blocks=1 00:18:49.673 --rc geninfo_unexecuted_blocks=1 00:18:49.673 00:18:49.673 ' 00:18:49.673 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:49.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.673 --rc genhtml_branch_coverage=1 00:18:49.673 --rc genhtml_function_coverage=1 00:18:49.673 --rc genhtml_legend=1 00:18:49.673 --rc geninfo_all_blocks=1 00:18:49.673 --rc geninfo_unexecuted_blocks=1 00:18:49.673 00:18:49.673 ' 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
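The xtrace records around this point are the standard SPDK test preamble: trim.sh resolves its own directory, derives the repository root from it, builds the rpc.py helper path, and takes the base and cache PCIe addresses from its two arguments. A minimal sketch of that preamble, assuming the standard repo layout visible in the log (an illustration of the traced idiom, not the verbatim script):

    testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py             # JSON-RPC client used for every bdev_* call below
    device=$1          # 0000:00:11.0 -> base nvme bdev
    cache_device=$2    # 0000:00:10.0 -> NV cache bdev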
00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:49.673 18:13:23 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=88996 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 88996 00:18:49.673 18:13:23 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:49.673 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88996 ']' 00:18:49.673 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.673 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.673 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.673 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.673 18:13:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:49.934 [2024-12-13 18:13:24.052505] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:49.934 [2024-12-13 18:13:24.052911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88996 ] 00:18:49.934 [2024-12-13 18:13:24.200928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:49.934 [2024-12-13 18:13:24.233126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.934 [2024-12-13 18:13:24.233515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.934 [2024-12-13 18:13:24.233417] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.884 18:13:24 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.884 18:13:24 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:50.884 18:13:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:50.884 18:13:24 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:50.884 18:13:24 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:50.884 18:13:24 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:50.884 18:13:24 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:50.884 18:13:24 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:50.884 18:13:25 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:50.884 18:13:25 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:50.884 18:13:25 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:50.884 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:50.884 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:50.884 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:50.884 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:50.884 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:51.146 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:51.146 { 00:18:51.146 "name": "nvme0n1", 00:18:51.146 "aliases": [ 
00:18:51.146 "63913086-05e4-4afb-81c3-b5fe2766d307" 00:18:51.146 ], 00:18:51.146 "product_name": "NVMe disk", 00:18:51.146 "block_size": 4096, 00:18:51.146 "num_blocks": 1310720, 00:18:51.146 "uuid": "63913086-05e4-4afb-81c3-b5fe2766d307", 00:18:51.146 "numa_id": -1, 00:18:51.146 "assigned_rate_limits": { 00:18:51.146 "rw_ios_per_sec": 0, 00:18:51.146 "rw_mbytes_per_sec": 0, 00:18:51.146 "r_mbytes_per_sec": 0, 00:18:51.146 "w_mbytes_per_sec": 0 00:18:51.146 }, 00:18:51.146 "claimed": true, 00:18:51.146 "claim_type": "read_many_write_one", 00:18:51.146 "zoned": false, 00:18:51.146 "supported_io_types": { 00:18:51.146 "read": true, 00:18:51.146 "write": true, 00:18:51.146 "unmap": true, 00:18:51.146 "flush": true, 00:18:51.146 "reset": true, 00:18:51.146 "nvme_admin": true, 00:18:51.146 "nvme_io": true, 00:18:51.146 "nvme_io_md": false, 00:18:51.146 "write_zeroes": true, 00:18:51.146 "zcopy": false, 00:18:51.146 "get_zone_info": false, 00:18:51.146 "zone_management": false, 00:18:51.146 "zone_append": false, 00:18:51.146 "compare": true, 00:18:51.146 "compare_and_write": false, 00:18:51.146 "abort": true, 00:18:51.146 "seek_hole": false, 00:18:51.146 "seek_data": false, 00:18:51.146 "copy": true, 00:18:51.146 "nvme_iov_md": false 00:18:51.146 }, 00:18:51.146 "driver_specific": { 00:18:51.146 "nvme": [ 00:18:51.146 { 00:18:51.146 "pci_address": "0000:00:11.0", 00:18:51.146 "trid": { 00:18:51.146 "trtype": "PCIe", 00:18:51.146 "traddr": "0000:00:11.0" 00:18:51.146 }, 00:18:51.146 "ctrlr_data": { 00:18:51.146 "cntlid": 0, 00:18:51.146 "vendor_id": "0x1b36", 00:18:51.146 "model_number": "QEMU NVMe Ctrl", 00:18:51.146 "serial_number": "12341", 00:18:51.146 "firmware_revision": "8.0.0", 00:18:51.146 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:51.146 "oacs": { 00:18:51.146 "security": 0, 00:18:51.146 "format": 1, 00:18:51.146 "firmware": 0, 00:18:51.146 "ns_manage": 1 00:18:51.146 }, 00:18:51.146 "multi_ctrlr": false, 00:18:51.146 "ana_reporting": false 00:18:51.146 }, 00:18:51.146 "vs": { 00:18:51.146 "nvme_version": "1.4" 00:18:51.146 }, 00:18:51.146 "ns_data": { 00:18:51.146 "id": 1, 00:18:51.146 "can_share": false 00:18:51.146 } 00:18:51.146 } 00:18:51.146 ], 00:18:51.146 "mp_policy": "active_passive" 00:18:51.146 } 00:18:51.146 } 00:18:51.146 ]' 00:18:51.146 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:51.146 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:51.147 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:51.147 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:51.147 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:51.147 18:13:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:18:51.147 18:13:25 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:51.147 18:13:25 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:51.147 18:13:25 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:51.147 18:13:25 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:51.147 18:13:25 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:51.407 18:13:25 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=9d97eed0-1c4c-4999-9ee3-12fa7dfc149d 00:18:51.407 18:13:25 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:51.407 18:13:25 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 9d97eed0-1c4c-4999-9ee3-12fa7dfc149d 00:18:51.667 18:13:25 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:51.928 18:13:26 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=0165edb5-731e-46a6-a4af-eaa2b79cf9d8 00:18:51.928 18:13:26 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0165edb5-731e-46a6-a4af-eaa2b79cf9d8 00:18:52.189 18:13:26 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:52.189 18:13:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:52.189 18:13:26 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:52.189 18:13:26 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:52.189 18:13:26 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:52.189 18:13:26 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:52.189 18:13:26 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:52.189 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:52.189 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:52.189 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:52.189 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:52.189 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:52.449 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:52.449 { 00:18:52.449 "name": "b9a5f7ec-c754-4b55-8b2a-7c0347491351", 00:18:52.449 "aliases": [ 00:18:52.449 "lvs/nvme0n1p0" 00:18:52.450 ], 00:18:52.450 "product_name": "Logical Volume", 00:18:52.450 "block_size": 4096, 00:18:52.450 "num_blocks": 26476544, 00:18:52.450 "uuid": "b9a5f7ec-c754-4b55-8b2a-7c0347491351", 00:18:52.450 "assigned_rate_limits": { 00:18:52.450 "rw_ios_per_sec": 0, 00:18:52.450 "rw_mbytes_per_sec": 0, 00:18:52.450 "r_mbytes_per_sec": 0, 00:18:52.450 "w_mbytes_per_sec": 0 00:18:52.450 }, 00:18:52.450 "claimed": false, 00:18:52.450 "zoned": false, 00:18:52.450 "supported_io_types": { 00:18:52.450 "read": true, 00:18:52.450 "write": true, 00:18:52.450 "unmap": true, 00:18:52.450 "flush": false, 00:18:52.450 "reset": true, 00:18:52.450 "nvme_admin": false, 00:18:52.450 "nvme_io": false, 00:18:52.450 "nvme_io_md": false, 00:18:52.450 "write_zeroes": true, 00:18:52.450 "zcopy": false, 00:18:52.450 "get_zone_info": false, 00:18:52.450 "zone_management": false, 00:18:52.450 "zone_append": false, 00:18:52.450 "compare": false, 00:18:52.450 "compare_and_write": false, 00:18:52.450 "abort": false, 00:18:52.450 "seek_hole": true, 00:18:52.450 "seek_data": true, 00:18:52.450 "copy": false, 00:18:52.450 "nvme_iov_md": false 00:18:52.450 }, 00:18:52.450 "driver_specific": { 00:18:52.450 "lvol": { 00:18:52.450 "lvol_store_uuid": "0165edb5-731e-46a6-a4af-eaa2b79cf9d8", 00:18:52.450 "base_bdev": "nvme0n1", 00:18:52.450 "thin_provision": true, 00:18:52.450 "num_allocated_clusters": 0, 00:18:52.450 "snapshot": false, 00:18:52.450 "clone": false, 00:18:52.450 "esnap_clone": false 00:18:52.450 } 00:18:52.450 } 00:18:52.450 } 00:18:52.450 ]' 00:18:52.450 18:13:26 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:52.450 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:52.450 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:52.450 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:52.450 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:52.450 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:52.450 18:13:26 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:52.450 18:13:26 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:52.450 18:13:26 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:52.711 18:13:26 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:52.711 18:13:26 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:52.711 18:13:26 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:52.711 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:52.711 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:52.711 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:52.711 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:52.711 18:13:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:52.972 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:52.972 { 00:18:52.972 "name": "b9a5f7ec-c754-4b55-8b2a-7c0347491351", 00:18:52.972 "aliases": [ 00:18:52.972 "lvs/nvme0n1p0" 00:18:52.972 ], 00:18:52.972 "product_name": "Logical Volume", 00:18:52.972 "block_size": 4096, 00:18:52.972 "num_blocks": 26476544, 00:18:52.972 "uuid": "b9a5f7ec-c754-4b55-8b2a-7c0347491351", 00:18:52.972 "assigned_rate_limits": { 00:18:52.972 "rw_ios_per_sec": 0, 00:18:52.972 "rw_mbytes_per_sec": 0, 00:18:52.972 "r_mbytes_per_sec": 0, 00:18:52.972 "w_mbytes_per_sec": 0 00:18:52.972 }, 00:18:52.972 "claimed": false, 00:18:52.972 "zoned": false, 00:18:52.972 "supported_io_types": { 00:18:52.972 "read": true, 00:18:52.972 "write": true, 00:18:52.972 "unmap": true, 00:18:52.972 "flush": false, 00:18:52.972 "reset": true, 00:18:52.972 "nvme_admin": false, 00:18:52.972 "nvme_io": false, 00:18:52.972 "nvme_io_md": false, 00:18:52.972 "write_zeroes": true, 00:18:52.972 "zcopy": false, 00:18:52.972 "get_zone_info": false, 00:18:52.972 "zone_management": false, 00:18:52.972 "zone_append": false, 00:18:52.972 "compare": false, 00:18:52.972 "compare_and_write": false, 00:18:52.972 "abort": false, 00:18:52.972 "seek_hole": true, 00:18:52.972 "seek_data": true, 00:18:52.972 "copy": false, 00:18:52.972 "nvme_iov_md": false 00:18:52.972 }, 00:18:52.972 "driver_specific": { 00:18:52.972 "lvol": { 00:18:52.972 "lvol_store_uuid": "0165edb5-731e-46a6-a4af-eaa2b79cf9d8", 00:18:52.972 "base_bdev": "nvme0n1", 00:18:52.972 "thin_provision": true, 00:18:52.972 "num_allocated_clusters": 0, 00:18:52.972 "snapshot": false, 00:18:52.972 "clone": false, 00:18:52.972 "esnap_clone": false 00:18:52.972 } 00:18:52.972 } 00:18:52.972 } 00:18:52.972 ]' 00:18:52.972 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:52.972 18:13:27 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:18:52.972 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:52.972 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:52.972 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:52.972 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:52.972 18:13:27 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:52.972 18:13:27 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:53.233 18:13:27 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:53.234 18:13:27 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:53.234 18:13:27 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:53.234 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:53.234 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:53.234 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:53.234 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:53.234 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9a5f7ec-c754-4b55-8b2a-7c0347491351 00:18:53.495 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:53.495 { 00:18:53.495 "name": "b9a5f7ec-c754-4b55-8b2a-7c0347491351", 00:18:53.495 "aliases": [ 00:18:53.495 "lvs/nvme0n1p0" 00:18:53.495 ], 00:18:53.495 "product_name": "Logical Volume", 00:18:53.495 "block_size": 4096, 00:18:53.495 "num_blocks": 26476544, 00:18:53.495 "uuid": "b9a5f7ec-c754-4b55-8b2a-7c0347491351", 00:18:53.495 "assigned_rate_limits": { 00:18:53.495 "rw_ios_per_sec": 0, 00:18:53.495 "rw_mbytes_per_sec": 0, 00:18:53.495 "r_mbytes_per_sec": 0, 00:18:53.495 "w_mbytes_per_sec": 0 00:18:53.495 }, 00:18:53.495 "claimed": false, 00:18:53.495 "zoned": false, 00:18:53.495 "supported_io_types": { 00:18:53.495 "read": true, 00:18:53.495 "write": true, 00:18:53.495 "unmap": true, 00:18:53.495 "flush": false, 00:18:53.495 "reset": true, 00:18:53.495 "nvme_admin": false, 00:18:53.495 "nvme_io": false, 00:18:53.495 "nvme_io_md": false, 00:18:53.495 "write_zeroes": true, 00:18:53.495 "zcopy": false, 00:18:53.495 "get_zone_info": false, 00:18:53.495 "zone_management": false, 00:18:53.495 "zone_append": false, 00:18:53.495 "compare": false, 00:18:53.495 "compare_and_write": false, 00:18:53.495 "abort": false, 00:18:53.495 "seek_hole": true, 00:18:53.495 "seek_data": true, 00:18:53.495 "copy": false, 00:18:53.495 "nvme_iov_md": false 00:18:53.495 }, 00:18:53.495 "driver_specific": { 00:18:53.495 "lvol": { 00:18:53.495 "lvol_store_uuid": "0165edb5-731e-46a6-a4af-eaa2b79cf9d8", 00:18:53.495 "base_bdev": "nvme0n1", 00:18:53.495 "thin_provision": true, 00:18:53.495 "num_allocated_clusters": 0, 00:18:53.495 "snapshot": false, 00:18:53.495 "clone": false, 00:18:53.495 "esnap_clone": false 00:18:53.495 } 00:18:53.495 } 00:18:53.495 } 00:18:53.495 ]' 00:18:53.495 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:53.495 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:53.495 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:53.495 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:18:53.495 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:53.495 18:13:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:53.495 18:13:27 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:53.495 18:13:27 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b9a5f7ec-c754-4b55-8b2a-7c0347491351 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:53.757 [2024-12-13 18:13:27.942223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.757 [2024-12-13 18:13:27.942323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:53.757 [2024-12-13 18:13:27.942339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:53.757 [2024-12-13 18:13:27.942353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.757 [2024-12-13 18:13:27.945304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.757 [2024-12-13 18:13:27.945544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.757 [2024-12-13 18:13:27.945570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:18:53.757 [2024-12-13 18:13:27.945585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.757 [2024-12-13 18:13:27.945868] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:53.757 [2024-12-13 18:13:27.946211] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:53.757 [2024-12-13 18:13:27.946240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.757 [2024-12-13 18:13:27.946286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.757 [2024-12-13 18:13:27.946301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:18:53.757 [2024-12-13 18:13:27.946312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.757 [2024-12-13 18:13:27.946431] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID eca19c9b-a273-421a-bcd8-19fa71a11ad2 00:18:53.757 [2024-12-13 18:13:27.948329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.757 [2024-12-13 18:13:27.948388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:53.757 [2024-12-13 18:13:27.948402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:53.757 [2024-12-13 18:13:27.948410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.757 [2024-12-13 18:13:27.957525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.758 [2024-12-13 18:13:27.957575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.758 [2024-12-13 18:13:27.957589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.021 ms 00:18:53.758 [2024-12-13 18:13:27.957598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.758 [2024-12-13 18:13:27.957758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.758 [2024-12-13 18:13:27.957781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.758 [2024-12-13 18:13:27.957795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.076 ms 00:18:53.758 [2024-12-13 18:13:27.957803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.758 [2024-12-13 18:13:27.957854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.758 [2024-12-13 18:13:27.957862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:53.758 [2024-12-13 18:13:27.957875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:53.758 [2024-12-13 18:13:27.957883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.758 [2024-12-13 18:13:27.957926] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:53.758 [2024-12-13 18:13:27.960148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.758 [2024-12-13 18:13:27.960401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.758 [2024-12-13 18:13:27.960425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:18:53.758 [2024-12-13 18:13:27.960436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.758 [2024-12-13 18:13:27.960495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.758 [2024-12-13 18:13:27.960506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:53.758 [2024-12-13 18:13:27.960514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:53.758 [2024-12-13 18:13:27.960527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.758 [2024-12-13 18:13:27.960558] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:53.758 [2024-12-13 18:13:27.960743] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:53.758 [2024-12-13 18:13:27.960767] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:53.758 [2024-12-13 18:13:27.960801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:53.758 [2024-12-13 18:13:27.960812] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:53.758 [2024-12-13 18:13:27.960824] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:53.758 [2024-12-13 18:13:27.960832] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:53.758 [2024-12-13 18:13:27.960842] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:53.758 [2024-12-13 18:13:27.960849] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:53.758 [2024-12-13 18:13:27.960862] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:53.758 [2024-12-13 18:13:27.960870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.758 [2024-12-13 18:13:27.960879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:53.758 [2024-12-13 18:13:27.960887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:53.758 [2024-12-13 18:13:27.960897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.758 [2024-12-13 18:13:27.961002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.758 
[2024-12-13 18:13:27.961016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:53.758 [2024-12-13 18:13:27.961024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:53.758 [2024-12-13 18:13:27.961034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.758 [2024-12-13 18:13:27.961162] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:53.758 [2024-12-13 18:13:27.961176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:53.758 [2024-12-13 18:13:27.961197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.758 [2024-12-13 18:13:27.961208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:53.758 [2024-12-13 18:13:27.961226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:53.758 [2024-12-13 18:13:27.961259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:53.758 [2024-12-13 18:13:27.961268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.758 [2024-12-13 18:13:27.961285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:53.758 [2024-12-13 18:13:27.961295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:53.758 [2024-12-13 18:13:27.961304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.758 [2024-12-13 18:13:27.961316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:53.758 [2024-12-13 18:13:27.961323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:53.758 [2024-12-13 18:13:27.961333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:53.758 [2024-12-13 18:13:27.961351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:53.758 [2024-12-13 18:13:27.961357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:53.758 [2024-12-13 18:13:27.961373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.758 [2024-12-13 18:13:27.961403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:53.758 [2024-12-13 18:13:27.961413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.758 [2024-12-13 18:13:27.961431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:53.758 [2024-12-13 18:13:27.961439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.758 [2024-12-13 18:13:27.961455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:53.758 [2024-12-13 18:13:27.961466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.758 [2024-12-13 18:13:27.961482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:53.758 [2024-12-13 18:13:27.961488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.758 [2024-12-13 18:13:27.961504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:53.758 [2024-12-13 18:13:27.961513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:53.758 [2024-12-13 18:13:27.961520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.758 [2024-12-13 18:13:27.961529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:53.758 [2024-12-13 18:13:27.961536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:53.758 [2024-12-13 18:13:27.961544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:53.758 [2024-12-13 18:13:27.961559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:53.758 [2024-12-13 18:13:27.961566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961575] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:53.758 [2024-12-13 18:13:27.961583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:53.758 [2024-12-13 18:13:27.961595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.758 [2024-12-13 18:13:27.961602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.758 [2024-12-13 18:13:27.961612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:53.758 [2024-12-13 18:13:27.961619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:53.758 [2024-12-13 18:13:27.961629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:53.758 [2024-12-13 18:13:27.961636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:53.758 [2024-12-13 18:13:27.961644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:53.758 [2024-12-13 18:13:27.961651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:53.758 [2024-12-13 18:13:27.961662] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:53.758 [2024-12-13 18:13:27.961672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.758 [2024-12-13 18:13:27.961682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:53.758 [2024-12-13 18:13:27.961690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:53.758 [2024-12-13 18:13:27.961700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:53.758 [2024-12-13 18:13:27.961707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:53.758 [2024-12-13 18:13:27.961717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:53.758 [2024-12-13 18:13:27.961724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:53.758 [2024-12-13 18:13:27.961735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:53.758 [2024-12-13 18:13:27.961742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:53.758 [2024-12-13 18:13:27.961752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:53.758 [2024-12-13 18:13:27.961759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:53.759 [2024-12-13 18:13:27.961769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:53.759 [2024-12-13 18:13:27.961776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:53.759 [2024-12-13 18:13:27.961784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:53.759 [2024-12-13 18:13:27.961792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:53.759 [2024-12-13 18:13:27.961801] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:53.759 [2024-12-13 18:13:27.961812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.759 [2024-12-13 18:13:27.961822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:53.759 [2024-12-13 18:13:27.961829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:53.759 [2024-12-13 18:13:27.961839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:53.759 [2024-12-13 18:13:27.961847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:53.759 [2024-12-13 18:13:27.961858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.759 [2024-12-13 18:13:27.961865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:53.759 [2024-12-13 18:13:27.961877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:18:53.759 [2024-12-13 18:13:27.961885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.759 [2024-12-13 18:13:27.962008] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:53.759 [2024-12-13 18:13:27.962019] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:56.301 [2024-12-13 18:13:30.621530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.301 [2024-12-13 18:13:30.621773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:56.301 [2024-12-13 18:13:30.621801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2659.509 ms 00:18:56.301 [2024-12-13 18:13:30.621810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.301 [2024-12-13 18:13:30.629874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.301 [2024-12-13 18:13:30.629913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:56.301 [2024-12-13 18:13:30.629926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.969 ms 00:18:56.301 [2024-12-13 18:13:30.629934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.301 [2024-12-13 18:13:30.630060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.301 [2024-12-13 18:13:30.630070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:56.301 [2024-12-13 18:13:30.630083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:56.301 [2024-12-13 18:13:30.630090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.301 [2024-12-13 18:13:30.646324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.301 [2024-12-13 18:13:30.646366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:56.301 [2024-12-13 18:13:30.646382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.203 ms 00:18:56.301 [2024-12-13 18:13:30.646391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.301 [2024-12-13 18:13:30.646476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.301 [2024-12-13 18:13:30.646491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:56.301 [2024-12-13 18:13:30.646502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:56.301 [2024-12-13 18:13:30.646511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.301 [2024-12-13 18:13:30.646840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.301 [2024-12-13 18:13:30.646865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:56.301 [2024-12-13 18:13:30.646878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:56.301 [2024-12-13 18:13:30.646886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.301 [2024-12-13 18:13:30.647036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.301 [2024-12-13 18:13:30.647069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:56.301 [2024-12-13 18:13:30.647084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:18:56.301 [2024-12-13 18:13:30.647093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.301 [2024-12-13 18:13:30.652745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.301 [2024-12-13 18:13:30.652886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:56.301 [2024-12-13 18:13:30.652905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.615 ms 00:18:56.301 [2024-12-13 18:13:30.652914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.301 [2024-12-13 18:13:30.661059] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:56.301 [2024-12-13 18:13:30.674828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.301 [2024-12-13 18:13:30.674956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:56.301 [2024-12-13 18:13:30.674972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.834 ms 00:18:56.301 [2024-12-13 18:13:30.674981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.728039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.728083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:56.561 [2024-12-13 18:13:30.728095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.978 ms 00:18:56.561 [2024-12-13 18:13:30.728120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.728325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.728339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:56.561 [2024-12-13 18:13:30.728348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:18:56.561 [2024-12-13 18:13:30.728357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.731293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.731327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:56.561 [2024-12-13 18:13:30.731336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.909 ms 00:18:56.561 [2024-12-13 18:13:30.731346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.734305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.734338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:56.561 [2024-12-13 18:13:30.734348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.908 ms 00:18:56.561 [2024-12-13 18:13:30.734356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.734649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.734665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:56.561 [2024-12-13 18:13:30.734673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:18:56.561 [2024-12-13 18:13:30.734683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.760456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.760492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:56.561 [2024-12-13 18:13:30.760502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.747 ms 00:18:56.561 [2024-12-13 18:13:30.760515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
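The trace steps above are the tail of the 'FTL startup' management process triggered by the bdev_ftl_create call at 18:13:27; once the finish_msg record below reports result 0, the harness verifies the new bdev over JSON-RPC. A minimal sketch of that check, assuming the same rpc_py helper and the ftl0 bdev name used throughout (the /tmp path is illustrative):

    "$rpc_py" bdev_wait_for_examine                            # waitforbdev step traced below
    "$rpc_py" bdev_get_bdevs -b ftl0 -t 2000 > /tmp/ftl0.json  # 2000 ms bdev timeout
    jq -r '.[] .block_size' /tmp/ftl0.json   # 4096
    jq -r '.[] .num_blocks' /tmp/ftl0.json   # 23592960, matching the L2P entry count above

23592960 blocks of 4096 bytes is 90 GiB of user-visible capacity out of the 103424 MiB base volume, consistent with the --overprovisioning 10 argument plus metadata overhead.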
00:18:56.561 [2024-12-13 18:13:30.764020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.764071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:56.561 [2024-12-13 18:13:30.764081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.434 ms 00:18:56.561 [2024-12-13 18:13:30.764090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.767144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.767179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:56.561 [2024-12-13 18:13:30.767189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:18:56.561 [2024-12-13 18:13:30.767199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.770630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.770668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:56.561 [2024-12-13 18:13:30.770677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.389 ms 00:18:56.561 [2024-12-13 18:13:30.770689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.770732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.770752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:56.561 [2024-12-13 18:13:30.770761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:56.561 [2024-12-13 18:13:30.770771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.770836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.561 [2024-12-13 18:13:30.770847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:56.561 [2024-12-13 18:13:30.770854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:56.561 [2024-12-13 18:13:30.770863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.561 [2024-12-13 18:13:30.771724] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:56.561 [2024-12-13 18:13:30.772685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2829.239 ms, result 0 00:18:56.561 [2024-12-13 18:13:30.773371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:56.561 { 00:18:56.561 "name": "ftl0", 00:18:56.561 "uuid": "eca19c9b-a273-421a-bcd8-19fa71a11ad2" 00:18:56.561 } 00:18:56.561 18:13:30 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:56.561 18:13:30 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:56.561 18:13:30 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:56.561 18:13:30 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:18:56.561 18:13:30 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:56.561 18:13:30 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:56.561 18:13:30 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:56.820 18:13:30 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:56.820 [ 00:18:56.820 { 00:18:56.820 "name": "ftl0", 00:18:56.820 "aliases": [ 00:18:56.820 "eca19c9b-a273-421a-bcd8-19fa71a11ad2" 00:18:56.820 ], 00:18:56.820 "product_name": "FTL disk", 00:18:56.820 "block_size": 4096, 00:18:56.820 "num_blocks": 23592960, 00:18:56.820 "uuid": "eca19c9b-a273-421a-bcd8-19fa71a11ad2", 00:18:56.820 "assigned_rate_limits": { 00:18:56.820 "rw_ios_per_sec": 0, 00:18:56.820 "rw_mbytes_per_sec": 0, 00:18:56.821 "r_mbytes_per_sec": 0, 00:18:56.821 "w_mbytes_per_sec": 0 00:18:56.821 }, 00:18:56.821 "claimed": false, 00:18:56.821 "zoned": false, 00:18:56.821 "supported_io_types": { 00:18:56.821 "read": true, 00:18:56.821 "write": true, 00:18:56.821 "unmap": true, 00:18:56.821 "flush": true, 00:18:56.821 "reset": false, 00:18:56.821 "nvme_admin": false, 00:18:56.821 "nvme_io": false, 00:18:56.821 "nvme_io_md": false, 00:18:56.821 "write_zeroes": true, 00:18:56.821 "zcopy": false, 00:18:56.821 "get_zone_info": false, 00:18:56.821 "zone_management": false, 00:18:56.821 "zone_append": false, 00:18:56.821 "compare": false, 00:18:56.821 "compare_and_write": false, 00:18:56.821 "abort": false, 00:18:56.821 "seek_hole": false, 00:18:56.821 "seek_data": false, 00:18:56.821 "copy": false, 00:18:56.821 "nvme_iov_md": false 00:18:56.821 }, 00:18:56.821 "driver_specific": { 00:18:56.821 "ftl": { 00:18:56.821 "base_bdev": "b9a5f7ec-c754-4b55-8b2a-7c0347491351", 00:18:56.821 "cache": "nvc0n1p0" 00:18:56.821 } 00:18:56.821 } 00:18:56.821 } 00:18:56.821 ] 00:18:56.821 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:18:57.080 18:13:31 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:57.080 18:13:31 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:57.080 18:13:31 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:57.080 18:13:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:57.347 18:13:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:57.347 { 00:18:57.347 "name": "ftl0", 00:18:57.347 "aliases": [ 00:18:57.347 "eca19c9b-a273-421a-bcd8-19fa71a11ad2" 00:18:57.347 ], 00:18:57.347 "product_name": "FTL disk", 00:18:57.347 "block_size": 4096, 00:18:57.347 "num_blocks": 23592960, 00:18:57.347 "uuid": "eca19c9b-a273-421a-bcd8-19fa71a11ad2", 00:18:57.347 "assigned_rate_limits": { 00:18:57.347 "rw_ios_per_sec": 0, 00:18:57.347 "rw_mbytes_per_sec": 0, 00:18:57.347 "r_mbytes_per_sec": 0, 00:18:57.347 "w_mbytes_per_sec": 0 00:18:57.347 }, 00:18:57.347 "claimed": false, 00:18:57.347 "zoned": false, 00:18:57.347 "supported_io_types": { 00:18:57.347 "read": true, 00:18:57.347 "write": true, 00:18:57.347 "unmap": true, 00:18:57.347 "flush": true, 00:18:57.347 "reset": false, 00:18:57.347 "nvme_admin": false, 00:18:57.347 "nvme_io": false, 00:18:57.347 "nvme_io_md": false, 00:18:57.347 "write_zeroes": true, 00:18:57.347 "zcopy": false, 00:18:57.347 "get_zone_info": false, 00:18:57.347 "zone_management": false, 00:18:57.347 "zone_append": false, 00:18:57.347 "compare": false, 00:18:57.347 "compare_and_write": false, 00:18:57.347 "abort": false, 00:18:57.347 "seek_hole": false, 00:18:57.347 "seek_data": false, 00:18:57.347 "copy": false, 00:18:57.347 "nvme_iov_md": false 00:18:57.347 }, 00:18:57.347 "driver_specific": { 00:18:57.347 "ftl": { 00:18:57.347 "base_bdev": "b9a5f7ec-c754-4b55-8b2a-7c0347491351", 
00:18:57.347 "cache": "nvc0n1p0" 00:18:57.347 } 00:18:57.347 } 00:18:57.347 } 00:18:57.347 ]' 00:18:57.347 18:13:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:57.347 18:13:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:57.347 18:13:31 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:57.611 [2024-12-13 18:13:31.816770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.816909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:57.611 [2024-12-13 18:13:31.816940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:57.611 [2024-12-13 18:13:31.816957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.817011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:57.611 [2024-12-13 18:13:31.817470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.817494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:57.611 [2024-12-13 18:13:31.817523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:18:57.611 [2024-12-13 18:13:31.817534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.818101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.818126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:57.611 [2024-12-13 18:13:31.818135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:18:57.611 [2024-12-13 18:13:31.818145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.821786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.821807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:57.611 [2024-12-13 18:13:31.821817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.616 ms 00:18:57.611 [2024-12-13 18:13:31.821827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.828702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.828736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:57.611 [2024-12-13 18:13:31.828758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.831 ms 00:18:57.611 [2024-12-13 18:13:31.828769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.830438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.830473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:57.611 [2024-12-13 18:13:31.830481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:18:57.611 [2024-12-13 18:13:31.830490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.834777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.834812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:57.611 [2024-12-13 18:13:31.834822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.239 ms 00:18:57.611 [2024-12-13 18:13:31.834833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.835030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.835041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:57.611 [2024-12-13 18:13:31.835049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:18:57.611 [2024-12-13 18:13:31.835057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.836835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.836869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:57.611 [2024-12-13 18:13:31.836877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:18:57.611 [2024-12-13 18:13:31.836888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.838282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.838313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:57.611 [2024-12-13 18:13:31.838322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:18:57.611 [2024-12-13 18:13:31.838332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.839382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.839492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:57.611 [2024-12-13 18:13:31.839506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:18:57.611 [2024-12-13 18:13:31.839514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.840616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.611 [2024-12-13 18:13:31.840647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:57.611 [2024-12-13 18:13:31.840655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:18:57.611 [2024-12-13 18:13:31.840664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.611 [2024-12-13 18:13:31.840707] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:57.611 [2024-12-13 18:13:31.840722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840783] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 
18:13:31.840989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.840996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.841007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.841014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.841024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.841032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.841041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:57.611 [2024-12-13 18:13:31.841048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:18:57.612 [2024-12-13 18:13:31.841193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:57.612 [2024-12-13 18:13:31.841585] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:57.612 [2024-12-13 18:13:31.841593] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eca19c9b-a273-421a-bcd8-19fa71a11ad2 00:18:57.612 [2024-12-13 18:13:31.841602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:57.612 [2024-12-13 18:13:31.841611] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:57.612 [2024-12-13 18:13:31.841619] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:57.612 [2024-12-13 18:13:31.841626] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:57.612 [2024-12-13 18:13:31.841635] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:57.612 [2024-12-13 18:13:31.841643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:57.612 
[2024-12-13 18:13:31.841651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:57.612 [2024-12-13 18:13:31.841658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:57.612 [2024-12-13 18:13:31.841665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:57.612 [2024-12-13 18:13:31.841672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.612 [2024-12-13 18:13:31.841681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:57.612 [2024-12-13 18:13:31.841689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:18:57.612 [2024-12-13 18:13:31.841700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.612 [2024-12-13 18:13:31.843226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.612 [2024-12-13 18:13:31.843485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:57.612 [2024-12-13 18:13:31.843517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:18:57.612 [2024-12-13 18:13:31.843539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.612 [2024-12-13 18:13:31.843634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.612 [2024-12-13 18:13:31.843727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:57.612 [2024-12-13 18:13:31.843752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:57.612 [2024-12-13 18:13:31.843773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.612 [2024-12-13 18:13:31.849124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.612 [2024-12-13 18:13:31.849233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.612 [2024-12-13 18:13:31.849300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.612 [2024-12-13 18:13:31.849324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.612 [2024-12-13 18:13:31.849457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.612 [2024-12-13 18:13:31.849526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.612 [2024-12-13 18:13:31.849577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.612 [2024-12-13 18:13:31.849604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.612 [2024-12-13 18:13:31.849677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.612 [2024-12-13 18:13:31.849786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.612 [2024-12-13 18:13:31.849811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.612 [2024-12-13 18:13:31.849831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.612 [2024-12-13 18:13:31.849870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.612 [2024-12-13 18:13:31.849893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.612 [2024-12-13 18:13:31.850015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.612 [2024-12-13 18:13:31.850039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.613 [2024-12-13 18:13:31.859408] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:57.613 [2024-12-13 18:13:31.859549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.613 [2024-12-13 18:13:31.859601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.613 [2024-12-13 18:13:31.859625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.613 [2024-12-13 18:13:31.867481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.613 [2024-12-13 18:13:31.867618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.613 [2024-12-13 18:13:31.867669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.613 [2024-12-13 18:13:31.867695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.613 [2024-12-13 18:13:31.867792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.613 [2024-12-13 18:13:31.867823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.613 [2024-12-13 18:13:31.867906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.613 [2024-12-13 18:13:31.867931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.613 [2024-12-13 18:13:31.867995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.613 [2024-12-13 18:13:31.868104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.613 [2024-12-13 18:13:31.868127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.613 [2024-12-13 18:13:31.868147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.613 [2024-12-13 18:13:31.868265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.613 [2024-12-13 18:13:31.868308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.613 [2024-12-13 18:13:31.868331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.613 [2024-12-13 18:13:31.868413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.613 [2024-12-13 18:13:31.868492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.613 [2024-12-13 18:13:31.868535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:57.613 [2024-12-13 18:13:31.868597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.613 [2024-12-13 18:13:31.868623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.613 [2024-12-13 18:13:31.868690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.613 [2024-12-13 18:13:31.868719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:57.613 [2024-12-13 18:13:31.868741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.613 [2024-12-13 18:13:31.868790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.613 [2024-12-13 18:13:31.868857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.613 [2024-12-13 18:13:31.868910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:57.613 [2024-12-13 18:13:31.868962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.613 [2024-12-13 18:13:31.868986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.613 
[2024-12-13 18:13:31.869181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.397 ms, result 0 00:18:57.613 true 00:18:57.613 18:13:31 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 88996 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88996 ']' 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88996 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88996 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88996' 00:18:57.613 killing process with pid 88996 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 88996 00:18:57.613 18:13:31 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 88996 00:19:02.887 18:13:37 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:03.856 65536+0 records in 00:19:03.856 65536+0 records out 00:19:03.856 268435456 bytes (268 MB, 256 MiB) copied, 1.12847 s, 238 MB/s 00:19:03.856 18:13:38 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:03.856 [2024-12-13 18:13:38.211688] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:03.856 [2024-12-13 18:13:38.211833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89161 ] 00:19:04.116 [2024-12-13 18:13:38.359410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.116 [2024-12-13 18:13:38.387915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.377 [2024-12-13 18:13:38.504797] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.377 [2024-12-13 18:13:38.504891] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.377 [2024-12-13 18:13:38.665397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.377 [2024-12-13 18:13:38.665619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.377 [2024-12-13 18:13:38.665643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:04.377 [2024-12-13 18:13:38.665653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.377 [2024-12-13 18:13:38.668230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.377 [2024-12-13 18:13:38.668316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.377 [2024-12-13 18:13:38.668331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:19:04.377 [2024-12-13 18:13:38.668339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.377 [2024-12-13 18:13:38.668456] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.377 [2024-12-13 18:13:38.668717] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.377 [2024-12-13 18:13:38.668735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.378 [2024-12-13 18:13:38.668743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.378 [2024-12-13 18:13:38.668753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:04.378 [2024-12-13 18:13:38.668761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.670771] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:04.378 [2024-12-13 18:13:38.674261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.378 [2024-12-13 18:13:38.674426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:04.378 [2024-12-13 18:13:38.674500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.492 ms 00:19:04.378 [2024-12-13 18:13:38.674525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.674715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.378 [2024-12-13 18:13:38.674944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:04.378 [2024-12-13 18:13:38.674975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:04.378 [2024-12-13 18:13:38.675001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.682992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:04.378 [2024-12-13 18:13:38.683153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.378 [2024-12-13 18:13:38.683228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.922 ms 00:19:04.378 [2024-12-13 18:13:38.683274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.683437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.378 [2024-12-13 18:13:38.683625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.378 [2024-12-13 18:13:38.683638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:04.378 [2024-12-13 18:13:38.683650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.683683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.378 [2024-12-13 18:13:38.683693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.378 [2024-12-13 18:13:38.683707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:04.378 [2024-12-13 18:13:38.683714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.683738] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:04.378 [2024-12-13 18:13:38.685764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.378 [2024-12-13 18:13:38.685801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.378 [2024-12-13 18:13:38.685811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:19:04.378 [2024-12-13 18:13:38.685824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.685871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.378 [2024-12-13 18:13:38.685883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.378 [2024-12-13 18:13:38.685897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:04.378 [2024-12-13 18:13:38.685904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.685924] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:04.378 [2024-12-13 18:13:38.685950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:04.378 [2024-12-13 18:13:38.685999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:04.378 [2024-12-13 18:13:38.686018] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:04.378 [2024-12-13 18:13:38.686125] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:04.378 [2024-12-13 18:13:38.686135] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.378 [2024-12-13 18:13:38.686146] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:04.378 [2024-12-13 18:13:38.686156] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686164] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686174] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:04.378 [2024-12-13 18:13:38.686181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.378 [2024-12-13 18:13:38.686189] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:04.378 [2024-12-13 18:13:38.686196] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:04.378 [2024-12-13 18:13:38.686210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.378 [2024-12-13 18:13:38.686218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.378 [2024-12-13 18:13:38.686226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:04.378 [2024-12-13 18:13:38.686234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.686341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.378 [2024-12-13 18:13:38.686351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.378 [2024-12-13 18:13:38.686359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:04.378 [2024-12-13 18:13:38.686366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.378 [2024-12-13 18:13:38.686471] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.378 [2024-12-13 18:13:38.686488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.378 [2024-12-13 18:13:38.686497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.378 [2024-12-13 18:13:38.686523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.378 [2024-12-13 18:13:38.686551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.378 [2024-12-13 18:13:38.686567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.378 [2024-12-13 18:13:38.686577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:04.378 [2024-12-13 18:13:38.686593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.378 [2024-12-13 18:13:38.686601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.378 [2024-12-13 18:13:38.686609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:04.378 [2024-12-13 18:13:38.686617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:04.378 [2024-12-13 18:13:38.686636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686644] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.378 [2024-12-13 18:13:38.686659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.378 [2024-12-13 18:13:38.686688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.378 [2024-12-13 18:13:38.686711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.378 [2024-12-13 18:13:38.686734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:04.378 [2024-12-13 18:13:38.686757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.378 [2024-12-13 18:13:38.686773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.378 [2024-12-13 18:13:38.686780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:04.378 [2024-12-13 18:13:38.686788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.378 [2024-12-13 18:13:38.686794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:04.378 [2024-12-13 18:13:38.686801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:04.378 [2024-12-13 18:13:38.686809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:04.378 [2024-12-13 18:13:38.686822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:04.378 [2024-12-13 18:13:38.686829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686835] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.378 [2024-12-13 18:13:38.686842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.378 [2024-12-13 18:13:38.686850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.378 [2024-12-13 18:13:38.686861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.378 [2024-12-13 18:13:38.686869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.378 [2024-12-13 18:13:38.686876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.378 [2024-12-13 18:13:38.686883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.378 
[2024-12-13 18:13:38.686890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:04.378 [2024-12-13 18:13:38.686897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.378 [2024-12-13 18:13:38.686904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.379 [2024-12-13 18:13:38.686913] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.379 [2024-12-13 18:13:38.686922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.379 [2024-12-13 18:13:38.686936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:04.379 [2024-12-13 18:13:38.686943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:04.379 [2024-12-13 18:13:38.686953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:04.379 [2024-12-13 18:13:38.686961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:04.379 [2024-12-13 18:13:38.686969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:04.379 [2024-12-13 18:13:38.686976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:04.379 [2024-12-13 18:13:38.686983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:04.379 [2024-12-13 18:13:38.686995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:04.379 [2024-12-13 18:13:38.687003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:04.379 [2024-12-13 18:13:38.687010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:04.379 [2024-12-13 18:13:38.687017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:04.379 [2024-12-13 18:13:38.687024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:04.379 [2024-12-13 18:13:38.687031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:04.379 [2024-12-13 18:13:38.687038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:04.379 [2024-12-13 18:13:38.687045] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.379 [2024-12-13 18:13:38.687057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.379 [2024-12-13 18:13:38.687067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.379 [2024-12-13 18:13:38.687074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.379 [2024-12-13 18:13:38.687081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.379 [2024-12-13 18:13:38.687088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.379 [2024-12-13 18:13:38.687096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.379 [2024-12-13 18:13:38.687103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.379 [2024-12-13 18:13:38.687110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:19:04.379 [2024-12-13 18:13:38.687117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.379 [2024-12-13 18:13:38.701016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.379 [2024-12-13 18:13:38.701198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.379 [2024-12-13 18:13:38.701218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.845 ms 00:19:04.379 [2024-12-13 18:13:38.701227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.379 [2024-12-13 18:13:38.701385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.379 [2024-12-13 18:13:38.701404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.379 [2024-12-13 18:13:38.701413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:04.379 [2024-12-13 18:13:38.701420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.379 [2024-12-13 18:13:38.727670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.379 [2024-12-13 18:13:38.727901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.379 [2024-12-13 18:13:38.727927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.224 ms 00:19:04.379 [2024-12-13 18:13:38.727939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.379 [2024-12-13 18:13:38.728070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.379 [2024-12-13 18:13:38.728087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.379 [2024-12-13 18:13:38.728100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:04.379 [2024-12-13 18:13:38.728110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.379 [2024-12-13 18:13:38.728737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.379 [2024-12-13 18:13:38.728779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.379 [2024-12-13 18:13:38.728795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:19:04.379 [2024-12-13 18:13:38.728808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.379 [2024-12-13 18:13:38.729014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.379 [2024-12-13 18:13:38.729039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.379 [2024-12-13 18:13:38.729051] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:19:04.379 [2024-12-13 18:13:38.729061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.379 [2024-12-13 18:13:38.737990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.379 [2024-12-13 18:13:38.738039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.379 [2024-12-13 18:13:38.738051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.898 ms 00:19:04.379 [2024-12-13 18:13:38.738064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.379 [2024-12-13 18:13:38.742170] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:04.379 [2024-12-13 18:13:38.742407] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:04.379 [2024-12-13 18:13:38.742427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.379 [2024-12-13 18:13:38.742435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:04.379 [2024-12-13 18:13:38.742444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.226 ms 00:19:04.379 [2024-12-13 18:13:38.742451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.639 [2024-12-13 18:13:38.760229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.639 [2024-12-13 18:13:38.760307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:04.639 [2024-12-13 18:13:38.760320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.708 ms 00:19:04.639 [2024-12-13 18:13:38.760329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.639 [2024-12-13 18:13:38.763177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.639 [2024-12-13 18:13:38.763225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:04.639 [2024-12-13 18:13:38.763234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:19:04.639 [2024-12-13 18:13:38.763265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.639 [2024-12-13 18:13:38.765406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.639 [2024-12-13 18:13:38.765588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:04.639 [2024-12-13 18:13:38.765605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:19:04.639 [2024-12-13 18:13:38.765612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.639 [2024-12-13 18:13:38.765946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.639 [2024-12-13 18:13:38.765962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.639 [2024-12-13 18:13:38.765977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:19:04.639 [2024-12-13 18:13:38.765984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.639 [2024-12-13 18:13:38.791552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.639 [2024-12-13 18:13:38.791624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:04.639 [2024-12-13 18:13:38.791637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.542 ms 00:19:04.639 [2024-12-13 18:13:38.791647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.640 [2024-12-13 18:13:38.800133] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:04.640 [2024-12-13 18:13:38.819481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.640 [2024-12-13 18:13:38.819698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.640 [2024-12-13 18:13:38.819730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.743 ms 00:19:04.640 [2024-12-13 18:13:38.819741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.640 [2024-12-13 18:13:38.819840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.640 [2024-12-13 18:13:38.819852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:04.640 [2024-12-13 18:13:38.819862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:04.640 [2024-12-13 18:13:38.819874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.640 [2024-12-13 18:13:38.819930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.640 [2024-12-13 18:13:38.819940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.640 [2024-12-13 18:13:38.819949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:04.640 [2024-12-13 18:13:38.819957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.640 [2024-12-13 18:13:38.819980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.640 [2024-12-13 18:13:38.819989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.640 [2024-12-13 18:13:38.819998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:04.640 [2024-12-13 18:13:38.820012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.640 [2024-12-13 18:13:38.820050] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:04.640 [2024-12-13 18:13:38.820061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.640 [2024-12-13 18:13:38.820069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:04.640 [2024-12-13 18:13:38.820078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:04.640 [2024-12-13 18:13:38.820090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.640 [2024-12-13 18:13:38.826166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.640 [2024-12-13 18:13:38.826220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.640 [2024-12-13 18:13:38.826231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.052 ms 00:19:04.640 [2024-12-13 18:13:38.826264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.640 [2024-12-13 18:13:38.826363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.640 [2024-12-13 18:13:38.826375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:04.640 [2024-12-13 18:13:38.826385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:04.640 [2024-12-13 18:13:38.826394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.640 
[2024-12-13 18:13:38.827458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.640 [2024-12-13 18:13:38.828894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.737 ms, result 0 00:19:04.640 [2024-12-13 18:13:38.830012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:04.640 [2024-12-13 18:13:38.837555] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:05.580  [2024-12-13T18:13:40.900Z] Copying: 11/256 [MB] (11 MBps) [2024-12-13T18:13:41.842Z] Copying: 23/256 [MB] (12 MBps) [2024-12-13T18:13:43.230Z] Copying: 52/256 [MB] (28 MBps) [2024-12-13T18:13:44.176Z] Copying: 71/256 [MB] (19 MBps) [2024-12-13T18:13:45.200Z] Copying: 87/256 [MB] (15 MBps) [2024-12-13T18:13:46.142Z] Copying: 113/256 [MB] (25 MBps) [2024-12-13T18:13:47.086Z] Copying: 124/256 [MB] (11 MBps) [2024-12-13T18:13:48.031Z] Copying: 140/256 [MB] (15 MBps) [2024-12-13T18:13:48.975Z] Copying: 156/256 [MB] (16 MBps) [2024-12-13T18:13:49.919Z] Copying: 187/256 [MB] (30 MBps) [2024-12-13T18:13:50.861Z] Copying: 218/256 [MB] (31 MBps) [2024-12-13T18:13:52.249Z] Copying: 242/256 [MB] (24 MBps) [2024-12-13T18:13:52.249Z] Copying: 255/256 [MB] (12 MBps) [2024-12-13T18:13:52.249Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-13 18:13:51.870881] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:17.872 [2024-12-13 18:13:51.873324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.873383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:17.872 [2024-12-13 18:13:51.873399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:17.872 [2024-12-13 18:13:51.873418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.873442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:17.872 [2024-12-13 18:13:51.874386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.874436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:17.872 [2024-12-13 18:13:51.874448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:19:17.872 [2024-12-13 18:13:51.874461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.877557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.877603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:17.872 [2024-12-13 18:13:51.877614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:19:17.872 [2024-12-13 18:13:51.877628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.885876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.885920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:17.872 [2024-12-13 18:13:51.885932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.228 ms 00:19:17.872 [2024-12-13 18:13:51.885949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 
[2024-12-13 18:13:51.892853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.893089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:17.872 [2024-12-13 18:13:51.893109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.846 ms 00:19:17.872 [2024-12-13 18:13:51.893118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.896225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.896283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:17.872 [2024-12-13 18:13:51.896294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.039 ms 00:19:17.872 [2024-12-13 18:13:51.896302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.902093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.902158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:17.872 [2024-12-13 18:13:51.902174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.719 ms 00:19:17.872 [2024-12-13 18:13:51.902182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.902346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.902360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:17.872 [2024-12-13 18:13:51.902369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:19:17.872 [2024-12-13 18:13:51.902382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.905841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.905886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:17.872 [2024-12-13 18:13:51.905897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.439 ms 00:19:17.872 [2024-12-13 18:13:51.905907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.908748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.908794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:17.872 [2024-12-13 18:13:51.908804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:19:17.872 [2024-12-13 18:13:51.908813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.911053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.911104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:17.872 [2024-12-13 18:13:51.911114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.196 ms 00:19:17.872 [2024-12-13 18:13:51.911121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.913194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.872 [2024-12-13 18:13:51.913240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:17.872 [2024-12-13 18:13:51.913263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.993 ms 00:19:17.872 [2024-12-13 18:13:51.913270] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.872 [2024-12-13 18:13:51.913311] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:17.872 [2024-12-13 18:13:51.913329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:17.872 [2024-12-13 18:13:51.913340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:17.872 [2024-12-13 18:13:51.913348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913730] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 
18:13:51.913930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.913999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:17.873 [2024-12-13 18:13:51.914093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:17.874 [2024-12-13 18:13:51.914101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:17.874 [2024-12-13 18:13:51.914108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:17.874 [2024-12-13 18:13:51.914116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:19:17.874 [2024-12-13 18:13:51.914124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:17.874 [2024-12-13 18:13:51.914132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:17.874 [2024-12-13 18:13:51.914148] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:17.874 [2024-12-13 18:13:51.914156] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eca19c9b-a273-421a-bcd8-19fa71a11ad2 00:19:17.874 [2024-12-13 18:13:51.914165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:17.874 [2024-12-13 18:13:51.914179] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:17.874 [2024-12-13 18:13:51.914186] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:17.874 [2024-12-13 18:13:51.914194] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:17.874 [2024-12-13 18:13:51.914201] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:17.874 [2024-12-13 18:13:51.914211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:17.874 [2024-12-13 18:13:51.914219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:17.874 [2024-12-13 18:13:51.914225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:17.874 [2024-12-13 18:13:51.914232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:17.874 [2024-12-13 18:13:51.914239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.874 [2024-12-13 18:13:51.914266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:17.874 [2024-12-13 18:13:51.914276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:19:17.874 [2024-12-13 18:13:51.914284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.917804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.874 [2024-12-13 18:13:51.917853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:17.874 [2024-12-13 18:13:51.917864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:19:17.874 [2024-12-13 18:13:51.917872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.918050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.874 [2024-12-13 18:13:51.918061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:17.874 [2024-12-13 18:13:51.918071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:17.874 [2024-12-13 18:13:51.918080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.928515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.928562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.874 [2024-12-13 18:13:51.928574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.928583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.928675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.928690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:19:17.874 [2024-12-13 18:13:51.928699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.928707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.928759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.928772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.874 [2024-12-13 18:13:51.928782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.928791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.928812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.928825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.874 [2024-12-13 18:13:51.928834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.928843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.948150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.948203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.874 [2024-12-13 18:13:51.948216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.948225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.963621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.963675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.874 [2024-12-13 18:13:51.963689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.963698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.963800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.963811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.874 [2024-12-13 18:13:51.963827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.963836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.963872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.963882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.874 [2024-12-13 18:13:51.963898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.963907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.963993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.964007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.874 [2024-12-13 18:13:51.964017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.964035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.964071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:17.874 [2024-12-13 18:13:51.964084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:17.874 [2024-12-13 18:13:51.964097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.964105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.964161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.964174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.874 [2024-12-13 18:13:51.964183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.964191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.964276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.874 [2024-12-13 18:13:51.964291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.874 [2024-12-13 18:13:51.964308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.874 [2024-12-13 18:13:51.964334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.874 [2024-12-13 18:13:51.964528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 91.175 ms, result 0 00:19:18.446 00:19:18.447 00:19:18.447 18:13:52 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89313 00:19:18.447 18:13:52 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89313 00:19:18.447 18:13:52 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:18.447 18:13:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89313 ']' 00:19:18.447 18:13:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.447 18:13:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.447 18:13:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.447 18:13:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.447 18:13:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:18.447 [2024-12-13 18:13:52.687529] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
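Two things worth noting at this seam. First, the statistics dump above reports "WAF: inf" because, going by the surrounding fields, write amplification is the ratio of total writes to user writes, and this instance saw 960 writes (presumably all metadata and NV-cache traffic, since every band is still free with wr_cnt 0) against 0 user writes. Second, once "FTL shutdown" finishes, trim.sh@71-73 launch a fresh spdk_tgt with the ftl_init log component enabled and spin in waitforlisten until the RPC socket answers. A minimal by-hand equivalent is sketched below; the paths are the ones in the log, the polling loop is a simplified stand-in for waitforlisten, and rpc_get_methods is used only as a cheap probe.

    # Start the target with FTL init tracing and wait for its RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!

    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt (pid $svcpid) ready on /var/tmp/spdk.sock"

    # trim.sh then replays the saved JSON config ("rpc.py load_config", fed on
    # stdin) and, as seen at trim.sh@78/@79 below, unmaps 1024 blocks at each
    # end of the 23592960-block address space:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024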
00:19:18.447 [2024-12-13 18:13:52.687971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89313 ] 00:19:18.707 [2024-12-13 18:13:52.834418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.707 [2024-12-13 18:13:52.861631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.279 18:13:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.279 18:13:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:19.279 18:13:53 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:19.540 [2024-12-13 18:13:53.730595] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:19.540 [2024-12-13 18:13:53.730699] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:19.540 [2024-12-13 18:13:53.911154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.540 [2024-12-13 18:13:53.911222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:19.540 [2024-12-13 18:13:53.911240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:19.540 [2024-12-13 18:13:53.911272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.540 [2024-12-13 18:13:53.914034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.540 [2024-12-13 18:13:53.914090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:19.540 [2024-12-13 18:13:53.914101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.741 ms 00:19:19.540 [2024-12-13 18:13:53.914111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.540 [2024-12-13 18:13:53.914239] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:19.540 [2024-12-13 18:13:53.914575] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:19.540 [2024-12-13 18:13:53.914601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.540 [2024-12-13 18:13:53.914613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:19.540 [2024-12-13 18:13:53.914623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:19:19.540 [2024-12-13 18:13:53.914633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.802 [2024-12-13 18:13:53.917071] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:19.802 [2024-12-13 18:13:53.921765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.802 [2024-12-13 18:13:53.922004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:19.802 [2024-12-13 18:13:53.922036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:19:19.802 [2024-12-13 18:13:53.922045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.802 [2024-12-13 18:13:53.922230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.802 [2024-12-13 18:13:53.922291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:19.802 [2024-12-13 18:13:53.922309] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:19.802 [2024-12-13 18:13:53.922319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.802 [2024-12-13 18:13:53.933720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.802 [2024-12-13 18:13:53.933873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:19.802 [2024-12-13 18:13:53.933941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.341 ms 00:19:19.802 [2024-12-13 18:13:53.933967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.802 [2024-12-13 18:13:53.934127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.802 [2024-12-13 18:13:53.934159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:19.802 [2024-12-13 18:13:53.934294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:19.802 [2024-12-13 18:13:53.934338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.802 [2024-12-13 18:13:53.934389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.802 [2024-12-13 18:13:53.934412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:19.802 [2024-12-13 18:13:53.934447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:19.802 [2024-12-13 18:13:53.934521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.802 [2024-12-13 18:13:53.934572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:19.802 [2024-12-13 18:13:53.937262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.802 [2024-12-13 18:13:53.937419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:19.802 [2024-12-13 18:13:53.937489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:19:19.802 [2024-12-13 18:13:53.937518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.802 [2024-12-13 18:13:53.937599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.802 [2024-12-13 18:13:53.937627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:19.802 [2024-12-13 18:13:53.937648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:19.802 [2024-12-13 18:13:53.937670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.802 [2024-12-13 18:13:53.937703] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:19.802 [2024-12-13 18:13:53.937793] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:19.803 [2024-12-13 18:13:53.937863] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:19.803 [2024-12-13 18:13:53.937918] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:19.803 [2024-12-13 18:13:53.938054] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:19.803 [2024-12-13 18:13:53.938095] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:19.803 [2024-12-13 18:13:53.938185] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:19.803 [2024-12-13 18:13:53.938225] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:19.803 [2024-12-13 18:13:53.938288] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:19.803 [2024-12-13 18:13:53.938325] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:19.803 [2024-12-13 18:13:53.938351] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:19.803 [2024-12-13 18:13:53.938380] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:19.803 [2024-12-13 18:13:53.938515] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:19.803 [2024-12-13 18:13:53.938540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.803 [2024-12-13 18:13:53.938560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:19.803 [2024-12-13 18:13:53.938582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:19:19.803 [2024-12-13 18:13:53.938603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.803 [2024-12-13 18:13:53.938733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.803 [2024-12-13 18:13:53.938759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:19.803 [2024-12-13 18:13:53.938852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:19.803 [2024-12-13 18:13:53.938879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.803 [2024-12-13 18:13:53.939005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:19.803 [2024-12-13 18:13:53.939040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:19.803 [2024-12-13 18:13:53.939065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.803 [2024-12-13 18:13:53.939085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.803 [2024-12-13 18:13:53.939111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:19.803 [2024-12-13 18:13:53.939130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:19.803 [2024-12-13 18:13:53.939151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:19.803 [2024-12-13 18:13:53.939171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:19.803 [2024-12-13 18:13:53.939192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:19.803 [2024-12-13 18:13:53.939292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.803 [2024-12-13 18:13:53.939322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:19.803 [2024-12-13 18:13:53.939342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:19.803 [2024-12-13 18:13:53.939363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.803 [2024-12-13 18:13:53.939383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:19.803 [2024-12-13 18:13:53.939405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:19.803 [2024-12-13 18:13:53.939425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.803 
[2024-12-13 18:13:53.939446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:19.803 [2024-12-13 18:13:53.939467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:19.803 [2024-12-13 18:13:53.939489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.803 [2024-12-13 18:13:53.939507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:19.803 [2024-12-13 18:13:53.939579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:19.803 [2024-12-13 18:13:53.939603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.803 [2024-12-13 18:13:53.939624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:19.803 [2024-12-13 18:13:53.939642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:19.803 [2024-12-13 18:13:53.939664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.803 [2024-12-13 18:13:53.939683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:19.803 [2024-12-13 18:13:53.939752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:19.803 [2024-12-13 18:13:53.939775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.803 [2024-12-13 18:13:53.939798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:19.803 [2024-12-13 18:13:53.939816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:19.803 [2024-12-13 18:13:53.939837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.803 [2024-12-13 18:13:53.939857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:19.803 [2024-12-13 18:13:53.939914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:19.803 [2024-12-13 18:13:53.939936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.803 [2024-12-13 18:13:53.939958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:19.803 [2024-12-13 18:13:53.939977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:19.803 [2024-12-13 18:13:53.940000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.803 [2024-12-13 18:13:53.940019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:19.803 [2024-12-13 18:13:53.940081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:19.803 [2024-12-13 18:13:53.940103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.803 [2024-12-13 18:13:53.940125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:19.803 [2024-12-13 18:13:53.940144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:19.803 [2024-12-13 18:13:53.940167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.803 [2024-12-13 18:13:53.940187] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:19.803 [2024-12-13 18:13:53.940432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:19.803 [2024-12-13 18:13:53.940475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.803 [2024-12-13 18:13:53.940498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.803 [2024-12-13 18:13:53.940519] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:19.803 [2024-12-13 18:13:53.940540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:19.803 [2024-12-13 18:13:53.940559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:19.803 [2024-12-13 18:13:53.940581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:19.803 [2024-12-13 18:13:53.940654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:19.803 [2024-12-13 18:13:53.940684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:19.803 [2024-12-13 18:13:53.940708] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:19.803 [2024-12-13 18:13:53.940742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.803 [2024-12-13 18:13:53.940809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:19.803 [2024-12-13 18:13:53.940850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:19.803 [2024-12-13 18:13:53.940879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:19.804 [2024-12-13 18:13:53.940910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:19.804 [2024-12-13 18:13:53.940974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:19.804 [2024-12-13 18:13:53.941008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:19.804 [2024-12-13 18:13:53.941071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:19.804 [2024-12-13 18:13:53.941106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:19.804 [2024-12-13 18:13:53.941139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:19.804 [2024-12-13 18:13:53.941202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:19.804 [2024-12-13 18:13:53.941410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:19.804 [2024-12-13 18:13:53.941586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:19.804 [2024-12-13 18:13:53.941617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:19.804 [2024-12-13 18:13:53.941652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:19.804 [2024-12-13 18:13:53.941673] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:19.804 [2024-12-13 
18:13:53.941694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.804 [2024-12-13 18:13:53.941703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:19.804 [2024-12-13 18:13:53.941713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:19.804 [2024-12-13 18:13:53.941721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:19.804 [2024-12-13 18:13:53.941733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:19.804 [2024-12-13 18:13:53.941745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:53.941757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:19.804 [2024-12-13 18:13:53.941767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:19:19.804 [2024-12-13 18:13:53.941777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:53.961779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:53.961830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.804 [2024-12-13 18:13:53.961843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.923 ms 00:19:19.804 [2024-12-13 18:13:53.961854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:53.961994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:53.962011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:19.804 [2024-12-13 18:13:53.962022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:19.804 [2024-12-13 18:13:53.962033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:53.979092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:53.979320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.804 [2024-12-13 18:13:53.979340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.037 ms 00:19:19.804 [2024-12-13 18:13:53.979355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:53.979431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:53.979449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.804 [2024-12-13 18:13:53.979462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:19.804 [2024-12-13 18:13:53.979473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:53.980145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:53.980185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.804 [2024-12-13 18:13:53.980197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:19:19.804 [2024-12-13 18:13:53.980208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:53.980420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:53.980443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.804 [2024-12-13 18:13:53.980453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:19:19.804 [2024-12-13 18:13:53.980463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:53.992039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:53.992096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:19.804 [2024-12-13 18:13:53.992111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.549 ms 00:19:19.804 [2024-12-13 18:13:53.992122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:54.006557] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:19.804 [2024-12-13 18:13:54.006787] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:19.804 [2024-12-13 18:13:54.006818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:54.006833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:19.804 [2024-12-13 18:13:54.006846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.548 ms 00:19:19.804 [2024-12-13 18:13:54.006859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:54.023430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:54.023485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:19.804 [2024-12-13 18:13:54.023499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.446 ms 00:19:19.804 [2024-12-13 18:13:54.023513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:54.026731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:54.026783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:19.804 [2024-12-13 18:13:54.026793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:19:19.804 [2024-12-13 18:13:54.026803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:54.029511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:54.029694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:19.804 [2024-12-13 18:13:54.029712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:19:19.804 [2024-12-13 18:13:54.029723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:54.030072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:54.030089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:19.804 [2024-12-13 18:13:54.030099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:19:19.804 [2024-12-13 18:13:54.030109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 
18:13:54.060263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:54.060345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:19.804 [2024-12-13 18:13:54.060360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.128 ms 00:19:19.804 [2024-12-13 18:13:54.060374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.804 [2024-12-13 18:13:54.069337] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:19.804 [2024-12-13 18:13:54.093019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.804 [2024-12-13 18:13:54.093069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:19.805 [2024-12-13 18:13:54.093085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.535 ms 00:19:19.805 [2024-12-13 18:13:54.093096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.805 [2024-12-13 18:13:54.093187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.805 [2024-12-13 18:13:54.093203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:19.805 [2024-12-13 18:13:54.093215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:19.805 [2024-12-13 18:13:54.093224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.805 [2024-12-13 18:13:54.093334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.805 [2024-12-13 18:13:54.093356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:19.805 [2024-12-13 18:13:54.093368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:19.805 [2024-12-13 18:13:54.093376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.805 [2024-12-13 18:13:54.093405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.805 [2024-12-13 18:13:54.093416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:19.805 [2024-12-13 18:13:54.093435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:19.805 [2024-12-13 18:13:54.093445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.805 [2024-12-13 18:13:54.093488] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:19.805 [2024-12-13 18:13:54.093500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.805 [2024-12-13 18:13:54.093511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:19.805 [2024-12-13 18:13:54.093519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:19.805 [2024-12-13 18:13:54.093530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.805 [2024-12-13 18:13:54.100146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.805 [2024-12-13 18:13:54.100414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:19.805 [2024-12-13 18:13:54.100437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.593 ms 00:19:19.805 [2024-12-13 18:13:54.100452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.805 [2024-12-13 18:13:54.100542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.805 [2024-12-13 18:13:54.100556] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:19.805 [2024-12-13 18:13:54.100566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:19.805 [2024-12-13 18:13:54.100577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.805 [2024-12-13 18:13:54.101834] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:19.805 [2024-12-13 18:13:54.103211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 190.301 ms, result 0 00:19:19.805 [2024-12-13 18:13:54.104924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:19.805 Some configs were skipped because the RPC state that can call them passed over. 00:19:19.805 18:13:54 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:20.066 [2024-12-13 18:13:54.339031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.066 [2024-12-13 18:13:54.339217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:20.066 [2024-12-13 18:13:54.339307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:19:20.066 [2024-12-13 18:13:54.339334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.066 [2024-12-13 18:13:54.339410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.575 ms, result 0 00:19:20.066 true 00:19:20.066 18:13:54 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:20.327 [2024-12-13 18:13:54.554907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.327 [2024-12-13 18:13:54.555079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:20.327 [2024-12-13 18:13:54.555137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:19:20.328 [2024-12-13 18:13:54.555162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.328 [2024-12-13 18:13:54.555219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.175 ms, result 0 00:19:20.328 true 00:19:20.328 18:13:54 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89313 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89313 ']' 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89313 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89313 00:19:20.328 killing process with pid 89313 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89313' 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89313 00:19:20.328 18:13:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89313 00:19:20.591 [2024-12-13 18:13:54.782691] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.782741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.591 [2024-12-13 18:13:54.782758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.591 [2024-12-13 18:13:54.782769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.782798] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:20.591 [2024-12-13 18:13:54.783387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.783410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.591 [2024-12-13 18:13:54.783422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:19:20.591 [2024-12-13 18:13:54.783433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.783739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.783757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.591 [2024-12-13 18:13:54.783768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:19:20.591 [2024-12-13 18:13:54.783781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.788447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.788553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:20.591 [2024-12-13 18:13:54.788604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:19:20.591 [2024-12-13 18:13:54.788633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.795529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.795633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:20.591 [2024-12-13 18:13:54.795690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.843 ms 00:19:20.591 [2024-12-13 18:13:54.795716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.797751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.797857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:20.591 [2024-12-13 18:13:54.797905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:19:20.591 [2024-12-13 18:13:54.797928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.802849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.802989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:20.591 [2024-12-13 18:13:54.803048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.534 ms 00:19:20.591 [2024-12-13 18:13:54.803078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.803256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.803288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:20.591 [2024-12-13 18:13:54.803338] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:20.591 [2024-12-13 18:13:54.803366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.805813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.805918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:20.591 [2024-12-13 18:13:54.805971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:19:20.591 [2024-12-13 18:13:54.805999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.808491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.808601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:20.591 [2024-12-13 18:13:54.808653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.401 ms 00:19:20.591 [2024-12-13 18:13:54.808678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.810467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.810567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:20.591 [2024-12-13 18:13:54.810619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:19:20.591 [2024-12-13 18:13:54.810644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.812892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.591 [2024-12-13 18:13:54.813029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:20.591 [2024-12-13 18:13:54.813088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.859 ms 00:19:20.591 [2024-12-13 18:13:54.813115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.591 [2024-12-13 18:13:54.813162] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:20.591 [2024-12-13 18:13:54.813195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813640] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 
[2024-12-13 18:13:54.813873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:20.591 [2024-12-13 18:13:54.813919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.813927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.813935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.813943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.813953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.813960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.813969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.813977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.813986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.813994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:20.592 [2024-12-13 18:13:54.814094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.814684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:20.592 [2024-12-13 18:13:54.815268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:20.592 [2024-12-13 18:13:54.815279] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eca19c9b-a273-421a-bcd8-19fa71a11ad2 00:19:20.592 [2024-12-13 18:13:54.815291] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:20.592 [2024-12-13 18:13:54.815301] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:20.592 [2024-12-13 18:13:54.815311] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:20.592 [2024-12-13 18:13:54.815319] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:20.592 [2024-12-13 18:13:54.815328] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:20.592 [2024-12-13 18:13:54.815340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:20.592 [2024-12-13 18:13:54.815357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:20.592 [2024-12-13 18:13:54.815363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:20.592 [2024-12-13 18:13:54.815371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:20.592 [2024-12-13 18:13:54.815381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:20.592 [2024-12-13 18:13:54.815391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:20.592 [2024-12-13 18:13:54.815401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:19:20.592 [2024-12-13 18:13:54.815417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.592 [2024-12-13 18:13:54.817191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.592 [2024-12-13 18:13:54.817219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:20.592 [2024-12-13 18:13:54.817230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:19:20.592 [2024-12-13 18:13:54.817253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.592 [2024-12-13 18:13:54.817384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.592 [2024-12-13 18:13:54.817403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:20.592 [2024-12-13 18:13:54.817414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:20.592 [2024-12-13 18:13:54.817423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.592 [2024-12-13 18:13:54.824421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.592 [2024-12-13 18:13:54.824457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.592 [2024-12-13 18:13:54.824468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.592 [2024-12-13 18:13:54.824479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.592 [2024-12-13 18:13:54.824547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.592 [2024-12-13 18:13:54.824559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.592 [2024-12-13 18:13:54.824568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.592 [2024-12-13 18:13:54.824581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.592 [2024-12-13 18:13:54.824623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.592 [2024-12-13 18:13:54.824635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.592 [2024-12-13 18:13:54.824643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.592 [2024-12-13 18:13:54.824652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 18:13:54.824671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.593 [2024-12-13 18:13:54.824682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.593 [2024-12-13 18:13:54.824690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.593 [2024-12-13 18:13:54.824701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 18:13:54.837996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.593 [2024-12-13 18:13:54.838186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.593 [2024-12-13 18:13:54.838204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.593 [2024-12-13 18:13:54.838222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 
18:13:54.848685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.593 [2024-12-13 18:13:54.848853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.593 [2024-12-13 18:13:54.848869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.593 [2024-12-13 18:13:54.848883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 18:13:54.848956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.593 [2024-12-13 18:13:54.848973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.593 [2024-12-13 18:13:54.848983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.593 [2024-12-13 18:13:54.848993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 18:13:54.849029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.593 [2024-12-13 18:13:54.849046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.593 [2024-12-13 18:13:54.849056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.593 [2024-12-13 18:13:54.849067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 18:13:54.849143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.593 [2024-12-13 18:13:54.849163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.593 [2024-12-13 18:13:54.849173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.593 [2024-12-13 18:13:54.849182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 18:13:54.849219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.593 [2024-12-13 18:13:54.849232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:20.593 [2024-12-13 18:13:54.849258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.593 [2024-12-13 18:13:54.849271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 18:13:54.849317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.593 [2024-12-13 18:13:54.849329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.593 [2024-12-13 18:13:54.849340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.593 [2024-12-13 18:13:54.849350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 18:13:54.849402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.593 [2024-12-13 18:13:54.849417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.593 [2024-12-13 18:13:54.849431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.593 [2024-12-13 18:13:54.849442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.593 [2024-12-13 18:13:54.849602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.885 ms, result 0 00:19:20.854 18:13:55 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:20.854 18:13:55 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:20.854 [2024-12-13 18:13:55.207552] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:20.854 [2024-12-13 18:13:55.207678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89350 ] 00:19:21.113 [2024-12-13 18:13:55.354739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.113 [2024-12-13 18:13:55.392625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.375 [2024-12-13 18:13:55.541898] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:21.375 [2024-12-13 18:13:55.541998] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:21.375 [2024-12-13 18:13:55.706287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.706563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:21.375 [2024-12-13 18:13:55.706590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:21.375 [2024-12-13 18:13:55.706601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.709387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.709441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.375 [2024-12-13 18:13:55.709453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.756 ms 00:19:21.375 [2024-12-13 18:13:55.709462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.709576] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:21.375 [2024-12-13 18:13:55.709861] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:21.375 [2024-12-13 18:13:55.709881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.709894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.375 [2024-12-13 18:13:55.709908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:21.375 [2024-12-13 18:13:55.709915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.712429] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:21.375 [2024-12-13 18:13:55.717234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.717429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:21.375 [2024-12-13 18:13:55.717503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.807 ms 00:19:21.375 [2024-12-13 18:13:55.717528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.717621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.717651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:21.375 [2024-12-13 18:13:55.717673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:19:21.375 [2024-12-13 18:13:55.717695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.728916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.729076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.375 [2024-12-13 18:13:55.729137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.154 ms 00:19:21.375 [2024-12-13 18:13:55.729160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.729352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.729389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.375 [2024-12-13 18:13:55.729493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:21.375 [2024-12-13 18:13:55.729525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.729573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.729598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:21.375 [2024-12-13 18:13:55.729629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:21.375 [2024-12-13 18:13:55.729696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.729732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:21.375 [2024-12-13 18:13:55.732433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.732583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.375 [2024-12-13 18:13:55.732601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:19:21.375 [2024-12-13 18:13:55.732618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.732670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.732683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:21.375 [2024-12-13 18:13:55.732699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:21.375 [2024-12-13 18:13:55.732707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.732728] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:21.375 [2024-12-13 18:13:55.732755] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:21.375 [2024-12-13 18:13:55.732802] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:21.375 [2024-12-13 18:13:55.732825] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:21.375 [2024-12-13 18:13:55.732939] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:21.375 [2024-12-13 18:13:55.732955] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:21.375 [2024-12-13 18:13:55.732966] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:21.375 [2024-12-13 18:13:55.732977] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:21.375 [2024-12-13 18:13:55.732987] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:21.375 [2024-12-13 18:13:55.732998] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:21.375 [2024-12-13 18:13:55.733006] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:21.375 [2024-12-13 18:13:55.733014] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:21.375 [2024-12-13 18:13:55.733023] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:21.375 [2024-12-13 18:13:55.733037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.733046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:21.375 [2024-12-13 18:13:55.733058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:21.375 [2024-12-13 18:13:55.733065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.733154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.375 [2024-12-13 18:13:55.733165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:21.375 [2024-12-13 18:13:55.733174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:21.375 [2024-12-13 18:13:55.733181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.375 [2024-12-13 18:13:55.733316] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:21.376 [2024-12-13 18:13:55.733333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:21.376 [2024-12-13 18:13:55.733343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:21.376 [2024-12-13 18:13:55.733354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:21.376 [2024-12-13 18:13:55.733373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:21.376 [2024-12-13 18:13:55.733393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:21.376 [2024-12-13 18:13:55.733403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:21.376 [2024-12-13 18:13:55.733418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:21.376 [2024-12-13 18:13:55.733425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:21.376 [2024-12-13 18:13:55.733432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:21.376 [2024-12-13 18:13:55.733442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:21.376 [2024-12-13 18:13:55.733451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:21.376 [2024-12-13 18:13:55.733459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733467] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:21.376 [2024-12-13 18:13:55.733475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:21.376 [2024-12-13 18:13:55.733482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:21.376 [2024-12-13 18:13:55.733496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.376 [2024-12-13 18:13:55.733511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:21.376 [2024-12-13 18:13:55.733526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.376 [2024-12-13 18:13:55.733540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:21.376 [2024-12-13 18:13:55.733547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.376 [2024-12-13 18:13:55.733561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:21.376 [2024-12-13 18:13:55.733569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.376 [2024-12-13 18:13:55.733583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:21.376 [2024-12-13 18:13:55.733589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:21.376 [2024-12-13 18:13:55.733602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:21.376 [2024-12-13 18:13:55.733609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:21.376 [2024-12-13 18:13:55.733615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:21.376 [2024-12-13 18:13:55.733623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:21.376 [2024-12-13 18:13:55.733631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:21.376 [2024-12-13 18:13:55.733640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:21.376 [2024-12-13 18:13:55.733653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:21.376 [2024-12-13 18:13:55.733660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733666] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:21.376 [2024-12-13 18:13:55.733674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:21.376 [2024-12-13 18:13:55.733687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:21.376 [2024-12-13 18:13:55.733694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.376 [2024-12-13 18:13:55.733703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:21.376 
[2024-12-13 18:13:55.733710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:21.376 [2024-12-13 18:13:55.733721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:21.376 [2024-12-13 18:13:55.733729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:21.376 [2024-12-13 18:13:55.733736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:21.376 [2024-12-13 18:13:55.733743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:21.376 [2024-12-13 18:13:55.733752] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:21.376 [2024-12-13 18:13:55.733763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:21.376 [2024-12-13 18:13:55.733775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:21.376 [2024-12-13 18:13:55.733785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:21.376 [2024-12-13 18:13:55.733793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:21.376 [2024-12-13 18:13:55.733801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:21.376 [2024-12-13 18:13:55.733808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:21.376 [2024-12-13 18:13:55.733816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:21.376 [2024-12-13 18:13:55.733823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:21.376 [2024-12-13 18:13:55.733837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:21.376 [2024-12-13 18:13:55.733845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:21.376 [2024-12-13 18:13:55.733853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:21.376 [2024-12-13 18:13:55.733860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:21.376 [2024-12-13 18:13:55.733868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:21.376 [2024-12-13 18:13:55.733875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:21.376 [2024-12-13 18:13:55.733883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:21.376 [2024-12-13 18:13:55.733889] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:21.376 [2024-12-13 18:13:55.733901] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:21.376 [2024-12-13 18:13:55.733913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:21.376 [2024-12-13 18:13:55.733920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:21.376 [2024-12-13 18:13:55.733928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:21.376 [2024-12-13 18:13:55.733935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:21.376 [2024-12-13 18:13:55.733945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.376 [2024-12-13 18:13:55.733960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:21.376 [2024-12-13 18:13:55.733976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:19:21.376 [2024-12-13 18:13:55.733988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.753956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.754006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.638 [2024-12-13 18:13:55.754019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.908 ms 00:19:21.638 [2024-12-13 18:13:55.754028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.754166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.754183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:21.638 [2024-12-13 18:13:55.754192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:21.638 [2024-12-13 18:13:55.754200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.778609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.778672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.638 [2024-12-13 18:13:55.778689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.382 ms 00:19:21.638 [2024-12-13 18:13:55.778700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.778823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.778840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.638 [2024-12-13 18:13:55.778851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:21.638 [2024-12-13 18:13:55.778862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.779613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.779661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.638 [2024-12-13 18:13:55.779675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:19:21.638 [2024-12-13 18:13:55.779686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 
18:13:55.779877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.779894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.638 [2024-12-13 18:13:55.779905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:19:21.638 [2024-12-13 18:13:55.779915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.791630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.791673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.638 [2024-12-13 18:13:55.791685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.683 ms 00:19:21.638 [2024-12-13 18:13:55.791700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.796602] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:21.638 [2024-12-13 18:13:55.796835] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:21.638 [2024-12-13 18:13:55.796855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.796864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:21.638 [2024-12-13 18:13:55.796874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.016 ms 00:19:21.638 [2024-12-13 18:13:55.796882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.813273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.813324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:21.638 [2024-12-13 18:13:55.813346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.319 ms 00:19:21.638 [2024-12-13 18:13:55.813355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.816137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.816358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:21.638 [2024-12-13 18:13:55.816379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.688 ms 00:19:21.638 [2024-12-13 18:13:55.816387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.818827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.638 [2024-12-13 18:13:55.818883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:21.638 [2024-12-13 18:13:55.818895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.390 ms 00:19:21.638 [2024-12-13 18:13:55.818902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.638 [2024-12-13 18:13:55.819298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.639 [2024-12-13 18:13:55.819315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:21.639 [2024-12-13 18:13:55.819327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:19:21.639 [2024-12-13 18:13:55.819336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.639 [2024-12-13 18:13:55.848858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:21.639 [2024-12-13 18:13:55.848921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:21.639 [2024-12-13 18:13:55.848935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.491 ms 00:19:21.639 [2024-12-13 18:13:55.848944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.639 [2024-12-13 18:13:55.857790] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:21.639 [2024-12-13 18:13:55.882824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.639 [2024-12-13 18:13:55.882895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:21.639 [2024-12-13 18:13:55.882910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.786 ms 00:19:21.639 [2024-12-13 18:13:55.882919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.639 [2024-12-13 18:13:55.883032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.639 [2024-12-13 18:13:55.883045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:21.639 [2024-12-13 18:13:55.883059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:21.639 [2024-12-13 18:13:55.883068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.639 [2024-12-13 18:13:55.883139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.639 [2024-12-13 18:13:55.883156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:21.639 [2024-12-13 18:13:55.883166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:21.639 [2024-12-13 18:13:55.883175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.639 [2024-12-13 18:13:55.883206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.639 [2024-12-13 18:13:55.883216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:21.639 [2024-12-13 18:13:55.883224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:21.639 [2024-12-13 18:13:55.883276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.639 [2024-12-13 18:13:55.883324] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:21.639 [2024-12-13 18:13:55.883337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.639 [2024-12-13 18:13:55.883346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:21.639 [2024-12-13 18:13:55.883356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:21.639 [2024-12-13 18:13:55.883367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.639 [2024-12-13 18:13:55.890083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.639 [2024-12-13 18:13:55.890337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:21.639 [2024-12-13 18:13:55.890358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.692 ms 00:19:21.639 [2024-12-13 18:13:55.890368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.639 [2024-12-13 18:13:55.890471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.639 [2024-12-13 18:13:55.890484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:21.639 [2024-12-13 18:13:55.890496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:21.639 [2024-12-13 18:13:55.890505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.639 [2024-12-13 18:13:55.892419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:21.639 [2024-12-13 18:13:55.893830] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 185.720 ms, result 0 00:19:21.639 [2024-12-13 18:13:55.895234] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:21.639 [2024-12-13 18:13:55.902648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.581  [2024-12-13T18:13:58.346Z] Copying: 20/256 [MB] (20 MBps) [2024-12-13T18:13:58.920Z] Copying: 41/256 [MB] (21 MBps) [2024-12-13T18:14:00.310Z] Copying: 63/256 [MB] (22 MBps) [2024-12-13T18:14:01.246Z] Copying: 86/256 [MB] (22 MBps) [2024-12-13T18:14:02.212Z] Copying: 106/256 [MB] (20 MBps) [2024-12-13T18:14:03.231Z] Copying: 122/256 [MB] (15 MBps) [2024-12-13T18:14:04.176Z] Copying: 142/256 [MB] (20 MBps) [2024-12-13T18:14:05.119Z] Copying: 152/256 [MB] (10 MBps) [2024-12-13T18:14:06.059Z] Copying: 172/256 [MB] (19 MBps) [2024-12-13T18:14:07.000Z] Copying: 182/256 [MB] (10 MBps) [2024-12-13T18:14:07.942Z] Copying: 205/256 [MB] (22 MBps) [2024-12-13T18:14:08.886Z] Copying: 228/256 [MB] (23 MBps) [2024-12-13T18:14:08.886Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-13 18:14:08.748396] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.509 [2024-12-13 18:14:08.749732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.749774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.509 [2024-12-13 18:14:08.749787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:34.509 [2024-12-13 18:14:08.749795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.749816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:34.509 [2024-12-13 18:14:08.750323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.750396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.509 [2024-12-13 18:14:08.750410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:19:34.509 [2024-12-13 18:14:08.750418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.750676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.750691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:34.509 [2024-12-13 18:14:08.750708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:19:34.509 [2024-12-13 18:14:08.750722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.754423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.754443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:34.509 [2024-12-13 18:14:08.754453] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.687 ms 00:19:34.509 [2024-12-13 18:14:08.754461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.761319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.761463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:34.509 [2024-12-13 18:14:08.761480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.839 ms 00:19:34.509 [2024-12-13 18:14:08.761494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.763975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.764013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.509 [2024-12-13 18:14:08.764022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:19:34.509 [2024-12-13 18:14:08.764029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.768298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.768465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.509 [2024-12-13 18:14:08.768481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:19:34.509 [2024-12-13 18:14:08.768488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.768616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.768626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.509 [2024-12-13 18:14:08.768635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:34.509 [2024-12-13 18:14:08.768644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.771418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.771455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:34.509 [2024-12-13 18:14:08.771464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:19:34.509 [2024-12-13 18:14:08.771471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.773498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.773537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:34.509 [2024-12-13 18:14:08.773545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:19:34.509 [2024-12-13 18:14:08.773552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.774974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.775112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.509 [2024-12-13 18:14:08.775127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.386 ms 00:19:34.509 [2024-12-13 18:14:08.775133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.776496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-12-13 18:14:08.776531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL clean state 00:19:34.509 [2024-12-13 18:14:08.776539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.301 ms 00:19:34.509 [2024-12-13 18:14:08.776545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-12-13 18:14:08.776580] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:34.509 [2024-12-13 18:14:08.776594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776755] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:34.509 [2024-12-13 18:14:08.776832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 
[2024-12-13 18:14:08.776950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.776994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 
state: free 00:19:34.510 [2024-12-13 18:14:08.777134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 
0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:34.510 [2024-12-13 18:14:08.777369] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:34.510 [2024-12-13 18:14:08.777376] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eca19c9b-a273-421a-bcd8-19fa71a11ad2 00:19:34.510 [2024-12-13 18:14:08.777385] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:34.510 [2024-12-13 18:14:08.777392] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:34.510 [2024-12-13 18:14:08.777406] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:34.510 [2024-12-13 18:14:08.777413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:34.510 [2024-12-13 18:14:08.777420] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:34.510 [2024-12-13 18:14:08.777428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:34.510 [2024-12-13 18:14:08.777438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:34.510 [2024-12-13 18:14:08.777444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:34.510 [2024-12-13 18:14:08.777450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:34.510 [2024-12-13 18:14:08.777457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.510 [2024-12-13 18:14:08.777465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:34.510 [2024-12-13 18:14:08.777473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:19:34.510 [2024-12-13 18:14:08.777481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.510 [2024-12-13 18:14:08.779478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.510 [2024-12-13 18:14:08.779504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:34.510 [2024-12-13 18:14:08.779513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.979 ms 00:19:34.510 [2024-12-13 18:14:08.779526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.510 [2024-12-13 18:14:08.779618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.510 [2024-12-13 18:14:08.779626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:34.510 [2024-12-13 18:14:08.779634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:34.510 [2024-12-13 18:14:08.779641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.510 [2024-12-13 18:14:08.785777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.510 [2024-12-13 18:14:08.785908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.510 [2024-12-13 18:14:08.785970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.510 [2024-12-13 18:14:08.785996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.510 [2024-12-13 
18:14:08.786106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.510 [2024-12-13 18:14:08.786132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.511 [2024-12-13 18:14:08.786177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.786199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.786275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.786330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.511 [2024-12-13 18:14:08.786352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.786371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.786407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.786427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.511 [2024-12-13 18:14:08.786480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.786502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.796895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.797047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.511 [2024-12-13 18:14:08.797106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.797132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.805169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.805328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.511 [2024-12-13 18:14:08.805380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.805402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.805464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.805489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.511 [2024-12-13 18:14:08.805509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.805527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.805566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.805592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.511 [2024-12-13 18:14:08.805613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.805683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.805777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.805802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.511 [2024-12-13 18:14:08.805830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.805850] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.805894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.805924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:34.511 [2024-12-13 18:14:08.805944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.805963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.806075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.806110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.511 [2024-12-13 18:14:08.806130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.806149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.806204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.511 [2024-12-13 18:14:08.806681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.511 [2024-12-13 18:14:08.806809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.511 [2024-12-13 18:14:08.806833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.511 [2024-12-13 18:14:08.806992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.232 ms, result 0 00:19:34.771 00:19:34.771 00:19:34.771 18:14:08 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:34.771 18:14:08 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:35.343 18:14:09 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:35.343 [2024-12-13 18:14:09.618712] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:35.344 [2024-12-13 18:14:09.619039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89505 ] 00:19:35.605 [2024-12-13 18:14:09.764299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.605 [2024-12-13 18:14:09.793940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.605 [2024-12-13 18:14:09.904012] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.605 [2024-12-13 18:14:09.904094] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.867 [2024-12-13 18:14:10.065553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.065621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.867 [2024-12-13 18:14:10.065637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:35.867 [2024-12-13 18:14:10.065647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.068234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.068308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.867 [2024-12-13 18:14:10.068320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:19:35.867 [2024-12-13 18:14:10.068328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.068474] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.867 [2024-12-13 18:14:10.068744] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.867 [2024-12-13 18:14:10.068762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.068771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.867 [2024-12-13 18:14:10.068785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:19:35.867 [2024-12-13 18:14:10.068792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.070811] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:35.867 [2024-12-13 18:14:10.074966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.075030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:35.867 [2024-12-13 18:14:10.075046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.158 ms 00:19:35.867 [2024-12-13 18:14:10.075054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.075146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.075157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:35.867 [2024-12-13 18:14:10.075167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:35.867 [2024-12-13 18:14:10.075175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.083627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:35.867 [2024-12-13 18:14:10.083675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.867 [2024-12-13 18:14:10.083687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.405 ms 00:19:35.867 [2024-12-13 18:14:10.083695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.083848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.083861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.867 [2024-12-13 18:14:10.083869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:35.867 [2024-12-13 18:14:10.083880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.083907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.083916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.867 [2024-12-13 18:14:10.083924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:35.867 [2024-12-13 18:14:10.083935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.083958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:35.867 [2024-12-13 18:14:10.086124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.086342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.867 [2024-12-13 18:14:10.086362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:19:35.867 [2024-12-13 18:14:10.086380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.086433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.086444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.867 [2024-12-13 18:14:10.086458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:35.867 [2024-12-13 18:14:10.086465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.867 [2024-12-13 18:14:10.086486] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:35.867 [2024-12-13 18:14:10.086506] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:35.867 [2024-12-13 18:14:10.086552] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:35.867 [2024-12-13 18:14:10.086571] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:35.867 [2024-12-13 18:14:10.086677] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.867 [2024-12-13 18:14:10.086688] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.867 [2024-12-13 18:14:10.086700] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.867 [2024-12-13 18:14:10.086710] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.867 [2024-12-13 18:14:10.086719] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.867 [2024-12-13 18:14:10.086727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:35.867 [2024-12-13 18:14:10.086735] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.867 [2024-12-13 18:14:10.086742] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.867 [2024-12-13 18:14:10.086754] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.867 [2024-12-13 18:14:10.086765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.867 [2024-12-13 18:14:10.086773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.867 [2024-12-13 18:14:10.086786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:19:35.868 [2024-12-13 18:14:10.086794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.868 [2024-12-13 18:14:10.086882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.868 [2024-12-13 18:14:10.086892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.868 [2024-12-13 18:14:10.086900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:35.868 [2024-12-13 18:14:10.086909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.868 [2024-12-13 18:14:10.087008] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.868 [2024-12-13 18:14:10.087021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.868 [2024-12-13 18:14:10.087031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.868 [2024-12-13 18:14:10.087040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.868 [2024-12-13 18:14:10.087058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:35.868 [2024-12-13 18:14:10.087076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.868 [2024-12-13 18:14:10.087085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.868 [2024-12-13 18:14:10.087102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.868 [2024-12-13 18:14:10.087109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:35.868 [2024-12-13 18:14:10.087117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.868 [2024-12-13 18:14:10.087124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.868 [2024-12-13 18:14:10.087134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:35.868 [2024-12-13 18:14:10.087142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.868 [2024-12-13 18:14:10.087159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:35.868 [2024-12-13 18:14:10.087167] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.868 [2024-12-13 18:14:10.087183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.868 [2024-12-13 18:14:10.087200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.868 [2024-12-13 18:14:10.087212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.868 [2024-12-13 18:14:10.087228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.868 [2024-12-13 18:14:10.087235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.868 [2024-12-13 18:14:10.087265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:35.868 [2024-12-13 18:14:10.087272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.868 [2024-12-13 18:14:10.087286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.868 [2024-12-13 18:14:10.087292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.868 [2024-12-13 18:14:10.087306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.868 [2024-12-13 18:14:10.087312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:35.868 [2024-12-13 18:14:10.087319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.868 [2024-12-13 18:14:10.087326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.868 [2024-12-13 18:14:10.087332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:35.868 [2024-12-13 18:14:10.087342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.868 [2024-12-13 18:14:10.087355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:35.868 [2024-12-13 18:14:10.087363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087370] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.868 [2024-12-13 18:14:10.087377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.868 [2024-12-13 18:14:10.087385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.868 [2024-12-13 18:14:10.087393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.868 [2024-12-13 18:14:10.087401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.868 [2024-12-13 18:14:10.087408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.868 [2024-12-13 18:14:10.087417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.868 
[2024-12-13 18:14:10.087424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.868 [2024-12-13 18:14:10.087431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.868 [2024-12-13 18:14:10.087438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.868 [2024-12-13 18:14:10.087446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.868 [2024-12-13 18:14:10.087456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.868 [2024-12-13 18:14:10.087468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:35.868 [2024-12-13 18:14:10.087475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:35.868 [2024-12-13 18:14:10.087483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:35.868 [2024-12-13 18:14:10.087490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:35.868 [2024-12-13 18:14:10.087497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:35.868 [2024-12-13 18:14:10.087504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:35.868 [2024-12-13 18:14:10.087511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:35.868 [2024-12-13 18:14:10.087523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:35.868 [2024-12-13 18:14:10.087530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:35.868 [2024-12-13 18:14:10.087538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:35.868 [2024-12-13 18:14:10.087545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:35.868 [2024-12-13 18:14:10.087553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:35.868 [2024-12-13 18:14:10.087559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:35.868 [2024-12-13 18:14:10.087566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:35.868 [2024-12-13 18:14:10.087573] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.868 [2024-12-13 18:14:10.087583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.868 [2024-12-13 18:14:10.087595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.868 [2024-12-13 18:14:10.087602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.868 [2024-12-13 18:14:10.087609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.868 [2024-12-13 18:14:10.087616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.868 [2024-12-13 18:14:10.087624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.868 [2024-12-13 18:14:10.087635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.868 [2024-12-13 18:14:10.087642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:19:35.868 [2024-12-13 18:14:10.087649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.868 [2024-12-13 18:14:10.101036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.868 [2024-12-13 18:14:10.101218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.868 [2024-12-13 18:14:10.101267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.334 ms 00:19:35.868 [2024-12-13 18:14:10.101277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.868 [2024-12-13 18:14:10.101411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.868 [2024-12-13 18:14:10.101427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.868 [2024-12-13 18:14:10.101436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:35.868 [2024-12-13 18:14:10.101447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.868 [2024-12-13 18:14:10.123687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.868 [2024-12-13 18:14:10.123760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.868 [2024-12-13 18:14:10.123781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.214 ms 00:19:35.868 [2024-12-13 18:14:10.123796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.868 [2024-12-13 18:14:10.123940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.868 [2024-12-13 18:14:10.123961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.868 [2024-12-13 18:14:10.123977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:35.868 [2024-12-13 18:14:10.123991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.868 [2024-12-13 18:14:10.124580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.868 [2024-12-13 18:14:10.124644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.869 [2024-12-13 18:14:10.124663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:19:35.869 [2024-12-13 18:14:10.124679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.124912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.124937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.869 [2024-12-13 18:14:10.124957] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:19:35.869 [2024-12-13 18:14:10.124971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.133185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.133231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.869 [2024-12-13 18:14:10.133241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.179 ms 00:19:35.869 [2024-12-13 18:14:10.133275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.136836] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:35.869 [2024-12-13 18:14:10.136884] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.869 [2024-12-13 18:14:10.136897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.136905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.869 [2024-12-13 18:14:10.136914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.513 ms 00:19:35.869 [2024-12-13 18:14:10.136921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.152495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.152552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.869 [2024-12-13 18:14:10.152564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.514 ms 00:19:35.869 [2024-12-13 18:14:10.152571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.155506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.155556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.869 [2024-12-13 18:14:10.155567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:19:35.869 [2024-12-13 18:14:10.155574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.158311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.158499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.869 [2024-12-13 18:14:10.158516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:19:35.869 [2024-12-13 18:14:10.158524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.158855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.158868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.869 [2024-12-13 18:14:10.158878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:35.869 [2024-12-13 18:14:10.158885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.181855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.182059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.869 [2024-12-13 18:14:10.182078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.947 ms 00:19:35.869 [2024-12-13 18:14:10.182087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.190211] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:35.869 [2024-12-13 18:14:10.207584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.207643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.869 [2024-12-13 18:14:10.207659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.414 ms 00:19:35.869 [2024-12-13 18:14:10.207667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.207753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.207765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.869 [2024-12-13 18:14:10.207781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:35.869 [2024-12-13 18:14:10.207789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.207843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.207852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.869 [2024-12-13 18:14:10.207864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:35.869 [2024-12-13 18:14:10.207872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.207893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.207901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.869 [2024-12-13 18:14:10.207909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.869 [2024-12-13 18:14:10.207919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.207953] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.869 [2024-12-13 18:14:10.207967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.207978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.869 [2024-12-13 18:14:10.207989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:35.869 [2024-12-13 18:14:10.207997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.213606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.213653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.869 [2024-12-13 18:14:10.213664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.590 ms 00:19:35.869 [2024-12-13 18:14:10.213672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 [2024-12-13 18:14:10.213764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.869 [2024-12-13 18:14:10.213774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.869 [2024-12-13 18:14:10.213783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:35.869 [2024-12-13 18:14:10.213791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.869 
[2024-12-13 18:14:10.214771] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.869 [2024-12-13 18:14:10.216002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.923 ms, result 0 00:19:35.869 [2024-12-13 18:14:10.217097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.869 [2024-12-13 18:14:10.224759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.443  [2024-12-13T18:14:10.820Z] Copying: 4096/4096 [kB] (average 13 MBps)[2024-12-13 18:14:10.526407] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.443 [2024-12-13 18:14:10.527968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.528137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:36.443 [2024-12-13 18:14:10.528209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:36.443 [2024-12-13 18:14:10.528234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.443 [2024-12-13 18:14:10.528297] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:36.443 [2024-12-13 18:14:10.529020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.529185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:36.443 [2024-12-13 18:14:10.529275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:19:36.443 [2024-12-13 18:14:10.529302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.443 [2024-12-13 18:14:10.531502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.531586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:36.443 [2024-12-13 18:14:10.531617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:19:36.443 [2024-12-13 18:14:10.531638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.443 [2024-12-13 18:14:10.535975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.536133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:36.443 [2024-12-13 18:14:10.536203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.308 ms 00:19:36.443 [2024-12-13 18:14:10.536227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.443 [2024-12-13 18:14:10.543200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.543376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:36.443 [2024-12-13 18:14:10.543396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.886 ms 00:19:36.443 [2024-12-13 18:14:10.543413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.443 [2024-12-13 18:14:10.546180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.546224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:36.443 [2024-12-13 18:14:10.546236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.693 ms 00:19:36.443 [2024-12-13 18:14:10.546260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.443 [2024-12-13 18:14:10.551648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.551830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:36.443 [2024-12-13 18:14:10.551945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.337 ms 00:19:36.443 [2024-12-13 18:14:10.551970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.443 [2024-12-13 18:14:10.552120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.552148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:36.443 [2024-12-13 18:14:10.552173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:36.443 [2024-12-13 18:14:10.552195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.443 [2024-12-13 18:14:10.556124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.556311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:36.443 [2024-12-13 18:14:10.556416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.898 ms 00:19:36.443 [2024-12-13 18:14:10.556441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.443 [2024-12-13 18:14:10.559451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.443 [2024-12-13 18:14:10.559613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:36.443 [2024-12-13 18:14:10.559668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:19:36.444 [2024-12-13 18:14:10.559689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.444 [2024-12-13 18:14:10.562917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.444 [2024-12-13 18:14:10.563319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:36.444 [2024-12-13 18:14:10.563497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:19:36.444 [2024-12-13 18:14:10.563632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.444 [2024-12-13 18:14:10.566731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.444 [2024-12-13 18:14:10.567021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:36.444 [2024-12-13 18:14:10.567216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.806 ms 00:19:36.444 [2024-12-13 18:14:10.567345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.444 [2024-12-13 18:14:10.567566] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:36.444 [2024-12-13 18:14:10.567719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.567957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.568108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.568215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 
18:14:10.568439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.568635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.568736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.568892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.568988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.569240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.569463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.569568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.569747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.569859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.569952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.570054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.570164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.570285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.570380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.570601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.570829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:36.444 [2024-12-13 18:14:10.571226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.571991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:36.444 [2024-12-13 18:14:10.572582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:36.445 [2024-12-13 18:14:10.572842] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:36.445 [2024-12-13 18:14:10.572850] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eca19c9b-a273-421a-bcd8-19fa71a11ad2 00:19:36.445 [2024-12-13 18:14:10.572859] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:36.445 [2024-12-13 18:14:10.572872] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:36.445 
[2024-12-13 18:14:10.572879] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:36.445 [2024-12-13 18:14:10.572887] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:36.445 [2024-12-13 18:14:10.572895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:36.445 [2024-12-13 18:14:10.572906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:36.445 [2024-12-13 18:14:10.572913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:36.445 [2024-12-13 18:14:10.572920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:36.445 [2024-12-13 18:14:10.572926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:36.445 [2024-12-13 18:14:10.572935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.445 [2024-12-13 18:14:10.572943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:36.445 [2024-12-13 18:14:10.572952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.377 ms 00:19:36.445 [2024-12-13 18:14:10.572960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.575394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.445 [2024-12-13 18:14:10.575555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:36.445 [2024-12-13 18:14:10.575618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms 00:19:36.445 [2024-12-13 18:14:10.575699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.575857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.445 [2024-12-13 18:14:10.575911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:36.445 [2024-12-13 18:14:10.576016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:36.445 [2024-12-13 18:14:10.576039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.584071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.584222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.445 [2024-12-13 18:14:10.584300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.584333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.584434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.584458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.445 [2024-12-13 18:14:10.584479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.584498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.584557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.584661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.445 [2024-12-13 18:14:10.584689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.584708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.584740] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.584760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.445 [2024-12-13 18:14:10.584778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.584844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.597777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.597959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.445 [2024-12-13 18:14:10.598011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.598040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.607676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.607843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.445 [2024-12-13 18:14:10.607895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.607917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.607960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.607982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.445 [2024-12-13 18:14:10.608002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.608021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.608070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.608190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.445 [2024-12-13 18:14:10.608210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.608229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.608373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.608402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.445 [2024-12-13 18:14:10.608756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.608806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.608897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.608931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:36.445 [2024-12-13 18:14:10.608953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.608977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.445 [2024-12-13 18:14:10.609032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.445 [2024-12-13 18:14:10.609119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.445 [2024-12-13 18:14:10.609145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.445 [2024-12-13 18:14:10.609165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:36.445 [2024-12-13 18:14:10.609381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:36.445 [2024-12-13 18:14:10.609403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:36.445 [2024-12-13 18:14:10.609412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:36.445 [2024-12-13 18:14:10.609422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:36.445 [2024-12-13 18:14:10.609572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.576 ms, result 0
00:19:36.445
00:19:36.445
00:19:36.706 18:14:10 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89525
00:19:36.706 18:14:10 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89525
00:19:36.706 18:14:10 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:19:36.706 18:14:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89525 ']'
00:19:36.706 18:14:10 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:36.706 18:14:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:36.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:36.706 18:14:10 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:36.706 18:14:10 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:36.706 18:14:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:36.706 [2024-12-13 18:14:10.899575] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:19:36.706 [2024-12-13 18:14:10.900143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89525 ]
00:19:36.706 [2024-12-13 18:14:11.041722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:36.706 [2024-12-13 18:14:11.070335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:37.649 18:14:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:37.649 18:14:11 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:19:37.649 18:14:11 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:19:37.649 [2024-12-13 18:14:11.968293] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:37.649 [2024-12-13 18:14:11.968390] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:37.910 [2024-12-13 18:14:12.145650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:37.910 [2024-12-13 18:14:12.145892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:37.910 [2024-12-13 18:14:12.145921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:37.910 [2024-12-13 18:14:12.145932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:37.910 [2024-12-13 18:14:12.148555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:37.910 [2024-12-13 18:14:12.148608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:37.910 [2024-12-13 18:14:12.148619] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:19:37.910 [2024-12-13 18:14:12.148629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.148765] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:37.911 [2024-12-13 18:14:12.149040] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:37.911 [2024-12-13 18:14:12.149059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.911 [2024-12-13 18:14:12.149071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.911 [2024-12-13 18:14:12.149082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:19:37.911 [2024-12-13 18:14:12.149092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.150978] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:37.911 [2024-12-13 18:14:12.155303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.911 [2024-12-13 18:14:12.155485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:37.911 [2024-12-13 18:14:12.155558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.320 ms 00:19:37.911 [2024-12-13 18:14:12.155572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.155650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.911 [2024-12-13 18:14:12.155663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:37.911 [2024-12-13 18:14:12.155678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:37.911 [2024-12-13 18:14:12.155685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.163972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.911 [2024-12-13 18:14:12.164019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.911 [2024-12-13 18:14:12.164031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.226 ms 00:19:37.911 [2024-12-13 18:14:12.164039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.164189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.911 [2024-12-13 18:14:12.164202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.911 [2024-12-13 18:14:12.164213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:37.911 [2024-12-13 18:14:12.164225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.164297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.911 [2024-12-13 18:14:12.164306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:37.911 [2024-12-13 18:14:12.164320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:37.911 [2024-12-13 18:14:12.164330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.164387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:37.911 [2024-12-13 18:14:12.166458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:37.911 [2024-12-13 18:14:12.166500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.911 [2024-12-13 18:14:12.166513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:19:37.911 [2024-12-13 18:14:12.166523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.166567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.911 [2024-12-13 18:14:12.166578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:37.911 [2024-12-13 18:14:12.166587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:37.911 [2024-12-13 18:14:12.166596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.166617] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:37.911 [2024-12-13 18:14:12.166643] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:37.911 [2024-12-13 18:14:12.166680] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:37.911 [2024-12-13 18:14:12.166706] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:37.911 [2024-12-13 18:14:12.166812] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:37.911 [2024-12-13 18:14:12.166831] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:37.911 [2024-12-13 18:14:12.166842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:37.911 [2024-12-13 18:14:12.166854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:37.911 [2024-12-13 18:14:12.166864] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:37.911 [2024-12-13 18:14:12.166877] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:37.911 [2024-12-13 18:14:12.166884] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:37.911 [2024-12-13 18:14:12.166895] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:37.911 [2024-12-13 18:14:12.166905] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:37.911 [2024-12-13 18:14:12.166915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.911 [2024-12-13 18:14:12.166922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:37.911 [2024-12-13 18:14:12.166932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:37.911 [2024-12-13 18:14:12.166940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.167030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.911 [2024-12-13 18:14:12.167038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:37.911 [2024-12-13 18:14:12.167048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:37.911 [2024-12-13 18:14:12.167054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.911 [2024-12-13 18:14:12.167160] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:37.911 [2024-12-13 18:14:12.167171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:37.911 [2024-12-13 18:14:12.167184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.911 [2024-12-13 18:14:12.167192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:37.911 [2024-12-13 18:14:12.167216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:37.911 [2024-12-13 18:14:12.167235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:37.911 [2024-12-13 18:14:12.167268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.911 [2024-12-13 18:14:12.167288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:37.911 [2024-12-13 18:14:12.167295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:37.911 [2024-12-13 18:14:12.167305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.911 [2024-12-13 18:14:12.167314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:37.911 [2024-12-13 18:14:12.167324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:37.911 [2024-12-13 18:14:12.167332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:37.911 [2024-12-13 18:14:12.167352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:37.911 [2024-12-13 18:14:12.167363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:37.911 [2024-12-13 18:14:12.167384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.911 [2024-12-13 18:14:12.167403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:37.911 [2024-12-13 18:14:12.167411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.911 [2024-12-13 18:14:12.167429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:37.911 [2024-12-13 18:14:12.167439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.911 [2024-12-13 18:14:12.167458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:37.911 [2024-12-13 18:14:12.167465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.911 [2024-12-13 18:14:12.167483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:37.911 [2024-12-13 
18:14:12.167494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.911 [2024-12-13 18:14:12.167511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:37.911 [2024-12-13 18:14:12.167519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:37.911 [2024-12-13 18:14:12.167529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.911 [2024-12-13 18:14:12.167536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:37.911 [2024-12-13 18:14:12.167545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:37.911 [2024-12-13 18:14:12.167552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:37.911 [2024-12-13 18:14:12.167567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:37.911 [2024-12-13 18:14:12.167576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167583] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:37.911 [2024-12-13 18:14:12.167597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:37.911 [2024-12-13 18:14:12.167618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.911 [2024-12-13 18:14:12.167632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.911 [2024-12-13 18:14:12.167640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:37.911 [2024-12-13 18:14:12.167649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:37.912 [2024-12-13 18:14:12.167656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:37.912 [2024-12-13 18:14:12.167664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:37.912 [2024-12-13 18:14:12.167671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:37.912 [2024-12-13 18:14:12.167682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:37.912 [2024-12-13 18:14:12.167690] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:37.912 [2024-12-13 18:14:12.167703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.912 [2024-12-13 18:14:12.167712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:37.912 [2024-12-13 18:14:12.167722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:37.912 [2024-12-13 18:14:12.167729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:37.912 [2024-12-13 18:14:12.167738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:37.912 [2024-12-13 18:14:12.167746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:37.912 
[2024-12-13 18:14:12.167755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:37.912 [2024-12-13 18:14:12.167762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:37.912 [2024-12-13 18:14:12.167771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:37.912 [2024-12-13 18:14:12.167778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:37.912 [2024-12-13 18:14:12.167787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:37.912 [2024-12-13 18:14:12.167794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:37.912 [2024-12-13 18:14:12.167809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:37.912 [2024-12-13 18:14:12.167817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:37.912 [2024-12-13 18:14:12.167828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:37.912 [2024-12-13 18:14:12.167836] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:37.912 [2024-12-13 18:14:12.167848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.912 [2024-12-13 18:14:12.167856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:37.912 [2024-12-13 18:14:12.167865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:37.912 [2024-12-13 18:14:12.167872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:37.912 [2024-12-13 18:14:12.167882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:37.912 [2024-12-13 18:14:12.167893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.167902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:37.912 [2024-12-13 18:14:12.167910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:19:37.912 [2024-12-13 18:14:12.167921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.181023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.181205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.912 [2024-12-13 18:14:12.181223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.045 ms 00:19:37.912 [2024-12-13 18:14:12.181233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.181397] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.181419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:37.912 [2024-12-13 18:14:12.181428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:37.912 [2024-12-13 18:14:12.181438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.192720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.192767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.912 [2024-12-13 18:14:12.192779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.258 ms 00:19:37.912 [2024-12-13 18:14:12.192792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.192852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.192865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.912 [2024-12-13 18:14:12.192874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:37.912 [2024-12-13 18:14:12.192884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.193373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.193403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.912 [2024-12-13 18:14:12.193414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:19:37.912 [2024-12-13 18:14:12.193426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.193583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.193604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.912 [2024-12-13 18:14:12.193613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:37.912 [2024-12-13 18:14:12.193624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.201141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.201190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.912 [2024-12-13 18:14:12.201200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.494 ms 00:19:37.912 [2024-12-13 18:14:12.201214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.211842] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:37.912 [2024-12-13 18:14:12.211899] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:37.912 [2024-12-13 18:14:12.211917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.211929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:37.912 [2024-12-13 18:14:12.211939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.567 ms 00:19:37.912 [2024-12-13 18:14:12.211948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.229655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 
18:14:12.229709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:37.912 [2024-12-13 18:14:12.229723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.649 ms 00:19:37.912 [2024-12-13 18:14:12.229735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.232311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.232367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:37.912 [2024-12-13 18:14:12.232377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.505 ms 00:19:37.912 [2024-12-13 18:14:12.232387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.235014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.235064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:37.912 [2024-12-13 18:14:12.235074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:19:37.912 [2024-12-13 18:14:12.235083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.235452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.235468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:37.912 [2024-12-13 18:14:12.235478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:19:37.912 [2024-12-13 18:14:12.235488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.257458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.257522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:37.912 [2024-12-13 18:14:12.257541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.946 ms 00:19:37.912 [2024-12-13 18:14:12.257554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.265591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:37.912 [2024-12-13 18:14:12.282881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.283098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:37.912 [2024-12-13 18:14:12.283128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.229 ms 00:19:37.912 [2024-12-13 18:14:12.283136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.283221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.283236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:37.912 [2024-12-13 18:14:12.283284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:37.912 [2024-12-13 18:14:12.283292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.283353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.283365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:37.912 [2024-12-13 18:14:12.283376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:37.912 [2024-12-13 
18:14:12.283383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.283410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.283419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:37.912 [2024-12-13 18:14:12.283436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.912 [2024-12-13 18:14:12.283443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.912 [2024-12-13 18:14:12.283479] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:37.912 [2024-12-13 18:14:12.283488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.912 [2024-12-13 18:14:12.283498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:37.912 [2024-12-13 18:14:12.283506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:37.912 [2024-12-13 18:14:12.283516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.172 [2024-12-13 18:14:12.288450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.172 [2024-12-13 18:14:12.288500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:38.172 [2024-12-13 18:14:12.288511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:19:38.172 [2024-12-13 18:14:12.288524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.173 [2024-12-13 18:14:12.288613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.173 [2024-12-13 18:14:12.288625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:38.173 [2024-12-13 18:14:12.288634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:38.173 [2024-12-13 18:14:12.288644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.173 [2024-12-13 18:14:12.289623] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.173 [2024-12-13 18:14:12.290934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 143.657 ms, result 0 00:19:38.173 [2024-12-13 18:14:12.292737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.173 Some configs were skipped because the RPC state that can call them passed over. 
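The startup trace above and the trim calls that follow reduce to a small driver pattern: start spdk_tgt, poll its RPC socket until it answers, replay a saved JSON config to recreate ftl0, trim through the bdev_ftl_unmap RPC, then kill and wait so the 'FTL shutdown' sequence can run. A minimal sketch of that flow, assuming a built SPDK tree; SPDK_DIR and CONFIG_JSON are illustrative placeholders (this log does not show where the config file lives, though it is presumably written earlier with rpc.py save_config), and the retry loop is a stand-in for the waitforlisten helper from autotest_common.sh:

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path taken from the trace above
    CONFIG_JSON=./ftl_config.json           # hypothetical; not shown in this log
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

    "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &   # trim.sh@92: launch the target
    svcpid=$!                                      # trim.sh@93: remember its pid

    # trim.sh@94 waitforlisten: poll until the UNIX socket accepts an RPC
    # (autotest_common.sh allows max_retries=100 attempts before giving up)
    for ((i = 0; i < 100; i++)); do
        if rpc rpc_get_methods &> /dev/null; then break; fi
        sleep 0.1
    done

    rpc load_config < "$CONFIG_JSON"   # trim.sh@96: recreate the bdev stack incl. ftl0

    # trim.sh@99-100: unmap 1024 blocks at each end of the L2P range
    # (23591936 + 1024 = 23592960, the L2P entry count reported at startup)
    rpc bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    rpc bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

    kill "$svcpid"           # trim.sh@102 killprocess: SIGTERM the target...
    wait "$svcpid" || true   # ...then wait; it exits via signal, so ignore the status

Each bdev_ftl_unmap prints true once the 'FTL trim' management process finishes with result 0, which is what the two blocks below record.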
00:19:38.173 18:14:12 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:19:38.173 [2024-12-13 18:14:12.530914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:38.173 [2024-12-13 18:14:12.531121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:38.173 [2024-12-13 18:14:12.531195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms
00:19:38.173 [2024-12-13 18:14:12.531222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.173 [2024-12-13 18:14:12.531299] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.395 ms, result 0
00:19:38.173 true
00:19:38.434 18:14:12 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:19:38.434 [2024-12-13 18:14:12.742905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:38.434 [2024-12-13 18:14:12.743092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:38.434 [2024-12-13 18:14:12.743157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.788 ms
00:19:38.434 [2024-12-13 18:14:12.743184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.434 [2024-12-13 18:14:12.743258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.125 ms, result 0
00:19:38.434 true
00:19:38.434 18:14:12 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89525
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89525 ']'
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89525
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89525
00:19:38.434 killing process with pid 89525
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89525'
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89525
00:19:38.434 18:14:12 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89525
00:19:38.696 [2024-12-13 18:14:12.908676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:38.696 [2024-12-13 18:14:12.908732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:38.696 [2024-12-13 18:14:12.908746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:38.696 [2024-12-13 18:14:12.908754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.696 [2024-12-13 18:14:12.908782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:38.696 [2024-12-13 18:14:12.909295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:38.696 [2024-12-13 18:14:12.909320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:38.696 [2024-12-13 18:14:12.909331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.499 ms 00:19:38.696 [2024-12-13 18:14:12.909340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.696 [2024-12-13 18:14:12.909629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.696 [2024-12-13 18:14:12.909656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:38.696 [2024-12-13 18:14:12.909665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:19:38.696 [2024-12-13 18:14:12.909675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.696 [2024-12-13 18:14:12.913863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.696 [2024-12-13 18:14:12.913900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:38.696 [2024-12-13 18:14:12.913909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.169 ms 00:19:38.696 [2024-12-13 18:14:12.913920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.696 [2024-12-13 18:14:12.920814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.696 [2024-12-13 18:14:12.920846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:38.696 [2024-12-13 18:14:12.920856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:19:38.696 [2024-12-13 18:14:12.920867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.696 [2024-12-13 18:14:12.923070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.696 [2024-12-13 18:14:12.923107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:38.696 [2024-12-13 18:14:12.923116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.138 ms 00:19:38.696 [2024-12-13 18:14:12.923124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.696 [2024-12-13 18:14:12.927166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.696 [2024-12-13 18:14:12.927324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:38.696 [2024-12-13 18:14:12.927356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:19:38.696 [2024-12-13 18:14:12.927369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.697 [2024-12-13 18:14:12.927489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.697 [2024-12-13 18:14:12.927500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:38.697 [2024-12-13 18:14:12.927508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:38.697 [2024-12-13 18:14:12.927517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.697 [2024-12-13 18:14:12.929761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.697 [2024-12-13 18:14:12.929797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:38.697 [2024-12-13 18:14:12.929806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.221 ms 00:19:38.697 [2024-12-13 18:14:12.929817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.697 [2024-12-13 18:14:12.932431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.697 [2024-12-13 18:14:12.932471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:38.697 [2024-12-13 
18:14:12.932479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:19:38.697 [2024-12-13 18:14:12.932488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.697 [2024-12-13 18:14:12.934468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.697 [2024-12-13 18:14:12.934587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:38.697 [2024-12-13 18:14:12.934601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:19:38.697 [2024-12-13 18:14:12.934610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.697 [2024-12-13 18:14:12.936450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.697 [2024-12-13 18:14:12.936488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:38.697 [2024-12-13 18:14:12.936496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.780 ms 00:19:38.697 [2024-12-13 18:14:12.936505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.697 [2024-12-13 18:14:12.936539] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:38.697 [2024-12-13 18:14:12.936554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936697] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 
18:14:12.936906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.936996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:38.697 [2024-12-13 18:14:12.937115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:38.697 [2024-12-13 18:14:12.937164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:38.698 [2024-12-13 18:14:12.937418] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:38.698 [2024-12-13 18:14:12.937426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eca19c9b-a273-421a-bcd8-19fa71a11ad2 00:19:38.698 [2024-12-13 18:14:12.937435] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:38.698 [2024-12-13 18:14:12.937444] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:38.698 [2024-12-13 18:14:12.937454] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:38.698 [2024-12-13 18:14:12.937461] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:38.698 [2024-12-13 18:14:12.937470] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:38.698 [2024-12-13 18:14:12.937479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:38.698 [2024-12-13 18:14:12.937489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:38.698 [2024-12-13 18:14:12.937496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:38.698 [2024-12-13 18:14:12.937504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:38.698 [2024-12-13 18:14:12.937511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.698 [2024-12-13 18:14:12.937520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:38.698 [2024-12-13 18:14:12.937528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:19:38.698 [2024-12-13 18:14:12.937538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.939337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.698 [2024-12-13 18:14:12.939367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:38.698 [2024-12-13 18:14:12.939376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms 00:19:38.698 [2024-12-13 18:14:12.939385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.939473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:38.698 [2024-12-13 18:14:12.939482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:38.698 [2024-12-13 18:14:12.939495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:38.698 [2024-12-13 18:14:12.939507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.945307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.945345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.698 [2024-12-13 18:14:12.945354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.945364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.945445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.945456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.698 [2024-12-13 18:14:12.945464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.945475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.945518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.945529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.698 [2024-12-13 18:14:12.945537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.945545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.945563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.945572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.698 [2024-12-13 18:14:12.945580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.945588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.955661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.955708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.698 [2024-12-13 18:14:12.955719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.955734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.963308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.963349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.698 [2024-12-13 18:14:12.963360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.963372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.963414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.963427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.698 [2024-12-13 18:14:12.963435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.963449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:38.698 [2024-12-13 18:14:12.963483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.963493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.698 [2024-12-13 18:14:12.963500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.963509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.963574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.963587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.698 [2024-12-13 18:14:12.963599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.963611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.963642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.963655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:38.698 [2024-12-13 18:14:12.963663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.963673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.963709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.963726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.698 [2024-12-13 18:14:12.963736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.963745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.963786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.698 [2024-12-13 18:14:12.963803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.698 [2024-12-13 18:14:12.963811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.698 [2024-12-13 18:14:12.963819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.698 [2024-12-13 18:14:12.963952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.257 ms, result 0 00:19:38.958 18:14:13 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:38.958 [2024-12-13 18:14:13.202158] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
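The read-back that produces the 'Copying' progress further below uses spdk_dd, SPDK's dd-style utility: --ib selects the input bdev (ftl0, recreated from the JSON app config), --of the output file, and --count the number of blocks to copy (65536 here, i.e. the 256 MiB visible in the progress counters). A minimal sketch of the invocation from this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

spdk_dd is a standalone SPDK application, so the DPDK EAL initialization lines that follow belong to this process (spdk_pid89561).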
00:19:38.958 [2024-12-13 18:14:13.202421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89561 ] 00:19:39.218 [2024-12-13 18:14:13.346071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.218 [2024-12-13 18:14:13.366293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.218 [2024-12-13 18:14:13.464893] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.218 [2024-12-13 18:14:13.465182] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.480 [2024-12-13 18:14:13.625800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.625864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:39.480 [2024-12-13 18:14:13.625881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:39.480 [2024-12-13 18:14:13.625890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.628530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.628589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:39.480 [2024-12-13 18:14:13.628601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.619 ms 00:19:39.480 [2024-12-13 18:14:13.628608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.628728] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:39.480 [2024-12-13 18:14:13.628997] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:39.480 [2024-12-13 18:14:13.629017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.629027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:39.480 [2024-12-13 18:14:13.629037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:39.480 [2024-12-13 18:14:13.629045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.630905] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:39.480 [2024-12-13 18:14:13.634561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.634613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:39.480 [2024-12-13 18:14:13.634630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:19:39.480 [2024-12-13 18:14:13.634645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.634747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.634759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:39.480 [2024-12-13 18:14:13.634768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:39.480 [2024-12-13 18:14:13.634776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.642977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:39.480 [2024-12-13 18:14:13.643020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:39.480 [2024-12-13 18:14:13.643031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.153 ms 00:19:39.480 [2024-12-13 18:14:13.643039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.643175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.643190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:39.480 [2024-12-13 18:14:13.643199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:39.480 [2024-12-13 18:14:13.643215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.643265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.643275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:39.480 [2024-12-13 18:14:13.643284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:39.480 [2024-12-13 18:14:13.643291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.643314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:39.480 [2024-12-13 18:14:13.645214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.645287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:39.480 [2024-12-13 18:14:13.645298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.906 ms 00:19:39.480 [2024-12-13 18:14:13.645311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.645358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.645371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:39.480 [2024-12-13 18:14:13.645380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:39.480 [2024-12-13 18:14:13.645387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.645408] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:39.480 [2024-12-13 18:14:13.645430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:39.480 [2024-12-13 18:14:13.645471] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:39.480 [2024-12-13 18:14:13.645490] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:39.480 [2024-12-13 18:14:13.645599] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:39.480 [2024-12-13 18:14:13.645611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:39.480 [2024-12-13 18:14:13.645626] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:39.480 [2024-12-13 18:14:13.645636] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:39.480 [2024-12-13 18:14:13.645645] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:39.480 [2024-12-13 18:14:13.645657] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:39.480 [2024-12-13 18:14:13.645665] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:39.480 [2024-12-13 18:14:13.645676] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:39.480 [2024-12-13 18:14:13.645685] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:39.480 [2024-12-13 18:14:13.645697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.645705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:39.480 [2024-12-13 18:14:13.645717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:39.480 [2024-12-13 18:14:13.645728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.645817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.480 [2024-12-13 18:14:13.645826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:39.480 [2024-12-13 18:14:13.645834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:39.480 [2024-12-13 18:14:13.645841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.480 [2024-12-13 18:14:13.645942] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:39.480 [2024-12-13 18:14:13.646040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:39.480 [2024-12-13 18:14:13.646053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.480 [2024-12-13 18:14:13.646062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.480 [2024-12-13 18:14:13.646071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:39.480 [2024-12-13 18:14:13.646079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:39.480 [2024-12-13 18:14:13.646087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:39.481 [2024-12-13 18:14:13.646098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:39.481 [2024-12-13 18:14:13.646106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.481 [2024-12-13 18:14:13.646122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:39.481 [2024-12-13 18:14:13.646130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:39.481 [2024-12-13 18:14:13.646138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.481 [2024-12-13 18:14:13.646146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:39.481 [2024-12-13 18:14:13.646154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:39.481 [2024-12-13 18:14:13.646162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:39.481 [2024-12-13 18:14:13.646177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:39.481 [2024-12-13 18:14:13.646185] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:39.481 [2024-12-13 18:14:13.646201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.481 [2024-12-13 18:14:13.646216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:39.481 [2024-12-13 18:14:13.646230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.481 [2024-12-13 18:14:13.646268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:39.481 [2024-12-13 18:14:13.646275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.481 [2024-12-13 18:14:13.646292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:39.481 [2024-12-13 18:14:13.646300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.481 [2024-12-13 18:14:13.646314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:39.481 [2024-12-13 18:14:13.646321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.481 [2024-12-13 18:14:13.646335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:39.481 [2024-12-13 18:14:13.646342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:39.481 [2024-12-13 18:14:13.646349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.481 [2024-12-13 18:14:13.646356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:39.481 [2024-12-13 18:14:13.646363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:39.481 [2024-12-13 18:14:13.646373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:39.481 [2024-12-13 18:14:13.646387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:39.481 [2024-12-13 18:14:13.646394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646401] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:39.481 [2024-12-13 18:14:13.646408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:39.481 [2024-12-13 18:14:13.646416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.481 [2024-12-13 18:14:13.646423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.481 [2024-12-13 18:14:13.646436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:39.481 [2024-12-13 18:14:13.646442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:39.481 [2024-12-13 18:14:13.646449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:39.481 
[2024-12-13 18:14:13.646455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:39.481 [2024-12-13 18:14:13.646463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:39.481 [2024-12-13 18:14:13.646470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:39.481 [2024-12-13 18:14:13.646478] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:39.481 [2024-12-13 18:14:13.646488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.481 [2024-12-13 18:14:13.646498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:39.481 [2024-12-13 18:14:13.646506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:39.481 [2024-12-13 18:14:13.646512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:39.481 [2024-12-13 18:14:13.646520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:39.481 [2024-12-13 18:14:13.646528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:39.481 [2024-12-13 18:14:13.646535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:39.481 [2024-12-13 18:14:13.646544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:39.481 [2024-12-13 18:14:13.646558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:39.481 [2024-12-13 18:14:13.646565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:39.481 [2024-12-13 18:14:13.646573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:39.481 [2024-12-13 18:14:13.646580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:39.481 [2024-12-13 18:14:13.646588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:39.481 [2024-12-13 18:14:13.646595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:39.481 [2024-12-13 18:14:13.646603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:39.481 [2024-12-13 18:14:13.646611] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:39.481 [2024-12-13 18:14:13.646622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.481 [2024-12-13 18:14:13.646633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:39.481 [2024-12-13 18:14:13.646640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:39.481 [2024-12-13 18:14:13.646648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:39.481 [2024-12-13 18:14:13.646655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:39.481 [2024-12-13 18:14:13.646663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.481 [2024-12-13 18:14:13.646670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:39.481 [2024-12-13 18:14:13.646678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:19:39.481 [2024-12-13 18:14:13.646685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.481 [2024-12-13 18:14:13.659621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.481 [2024-12-13 18:14:13.659665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.481 [2024-12-13 18:14:13.659677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.881 ms 00:19:39.481 [2024-12-13 18:14:13.659684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.481 [2024-12-13 18:14:13.659812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.481 [2024-12-13 18:14:13.659833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:39.481 [2024-12-13 18:14:13.659842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:39.481 [2024-12-13 18:14:13.659849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.481 [2024-12-13 18:14:13.682314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.481 [2024-12-13 18:14:13.682382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.481 [2024-12-13 18:14:13.682404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.436 ms 00:19:39.481 [2024-12-13 18:14:13.682418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.481 [2024-12-13 18:14:13.682562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.481 [2024-12-13 18:14:13.682583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.481 [2024-12-13 18:14:13.682599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:39.481 [2024-12-13 18:14:13.682613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.481 [2024-12-13 18:14:13.683136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.481 [2024-12-13 18:14:13.683189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.481 [2024-12-13 18:14:13.683219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:19:39.481 [2024-12-13 18:14:13.683233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.481 [2024-12-13 18:14:13.683501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.481 [2024-12-13 18:14:13.683522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.481 [2024-12-13 18:14:13.683537] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:19:39.481 [2024-12-13 18:14:13.683550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.481 [2024-12-13 18:14:13.692071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.481 [2024-12-13 18:14:13.692115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.481 [2024-12-13 18:14:13.692126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.486 ms 00:19:39.481 [2024-12-13 18:14:13.692139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.481 [2024-12-13 18:14:13.695375] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:39.481 [2024-12-13 18:14:13.695423] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:39.481 [2024-12-13 18:14:13.695436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.695444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:39.482 [2024-12-13 18:14:13.695453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.169 ms 00:19:39.482 [2024-12-13 18:14:13.695460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.711145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.711193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:39.482 [2024-12-13 18:14:13.711216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.622 ms 00:19:39.482 [2024-12-13 18:14:13.711224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.714219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.714417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:39.482 [2024-12-13 18:14:13.714437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.885 ms 00:19:39.482 [2024-12-13 18:14:13.714445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.717086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.717146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:39.482 [2024-12-13 18:14:13.717156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:19:39.482 [2024-12-13 18:14:13.717163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.717528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.717541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:39.482 [2024-12-13 18:14:13.717552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:39.482 [2024-12-13 18:14:13.717566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.741145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.741378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:39.482 [2024-12-13 18:14:13.741400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.520 ms 00:19:39.482 [2024-12-13 18:14:13.741409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.749536] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:39.482 [2024-12-13 18:14:13.767779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.767830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:39.482 [2024-12-13 18:14:13.767842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.288 ms 00:19:39.482 [2024-12-13 18:14:13.767850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.767939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.767951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:39.482 [2024-12-13 18:14:13.767964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:39.482 [2024-12-13 18:14:13.767972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.768031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.768041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:39.482 [2024-12-13 18:14:13.768050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:39.482 [2024-12-13 18:14:13.768058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.768088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.768097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:39.482 [2024-12-13 18:14:13.768105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:39.482 [2024-12-13 18:14:13.768116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.768156] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:39.482 [2024-12-13 18:14:13.768168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.768176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:39.482 [2024-12-13 18:14:13.768185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:39.482 [2024-12-13 18:14:13.768193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.773869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.773914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:39.482 [2024-12-13 18:14:13.773925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.655 ms 00:19:39.482 [2024-12-13 18:14:13.773934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 [2024-12-13 18:14:13.774030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.482 [2024-12-13 18:14:13.774041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:39.482 [2024-12-13 18:14:13.774050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:39.482 [2024-12-13 18:14:13.774058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.482 
[2024-12-13 18:14:13.775029] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:39.482 [2024-12-13 18:14:13.776365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.931 ms, result 0 00:19:39.482 [2024-12-13 18:14:13.777166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:39.482 [2024-12-13 18:14:13.785130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.875 [2024-12-13T18:14:16.196Z] Copying: 24/256 [MB] (24 MBps) [2024-12-13T18:14:17.140Z] Copying: 35/256 [MB] (10 MBps) [2024-12-13T18:14:18.084Z] Copying: 56/256 [MB] (20 MBps) [2024-12-13T18:14:19.027Z] Copying: 75/256 [MB] (19 MBps) [2024-12-13T18:14:19.971Z] Copying: 91/256 [MB] (15 MBps) [2024-12-13T18:14:20.913Z] Copying: 108/256 [MB] (17 MBps) [2024-12-13T18:14:21.856Z] Copying: 120/256 [MB] (12 MBps) [2024-12-13T18:14:23.243Z] Copying: 137/256 [MB] (17 MBps) [2024-12-13T18:14:24.187Z] Copying: 154/256 [MB] (16 MBps) [2024-12-13T18:14:25.131Z] Copying: 171/256 [MB] (17 MBps) [2024-12-13T18:14:26.088Z] Copying: 189/256 [MB] (17 MBps) [2024-12-13T18:14:27.088Z] Copying: 203/256 [MB] (14 MBps) [2024-12-13T18:14:28.033Z] Copying: 217/256 [MB] (14 MBps) [2024-12-13T18:14:28.975Z] Copying: 233/256 [MB] (15 MBps) [2024-12-13T18:14:29.236Z] Copying: 247/256 [MB] (14 MBps) [2024-12-13T18:14:29.498Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-13 18:14:29.319675] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.121 [2024-12-13 18:14:29.321869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.321932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:55.121 [2024-12-13 18:14:29.321954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:55.121 [2024-12-13 18:14:29.321966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.322000] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:55.121 [2024-12-13 18:14:29.322764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.323009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:55.121 [2024-12-13 18:14:29.323039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:19:55.121 [2024-12-13 18:14:29.323053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.323490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.323515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:55.121 [2024-12-13 18:14:29.323534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:19:55.121 [2024-12-13 18:14:29.323545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.327869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.327893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:55.121 [2024-12-13 18:14:29.327905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.299 ms
00:19:55.121 [2024-12-13 18:14:29.327914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.335730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.335777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:55.121 [2024-12-13 18:14:29.335789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.792 ms 00:19:55.121 [2024-12-13 18:14:29.335805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.338696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.338750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:55.121 [2024-12-13 18:14:29.338760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:19:55.121 [2024-12-13 18:14:29.338768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.344563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.344614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:55.121 [2024-12-13 18:14:29.344637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.745 ms 00:19:55.121 [2024-12-13 18:14:29.344645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.344787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.344798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:55.121 [2024-12-13 18:14:29.344808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:55.121 [2024-12-13 18:14:29.344819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.348021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.348070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:55.121 [2024-12-13 18:14:29.348080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:19:55.121 [2024-12-13 18:14:29.348088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.350936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.350986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:55.121 [2024-12-13 18:14:29.350996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.803 ms 00:19:55.121 [2024-12-13 18:14:29.351003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.353460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.353647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:55.121 [2024-12-13 18:14:29.353667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.412 ms 00:19:55.121 [2024-12-13 18:14:29.353674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.356004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.121 [2024-12-13 18:14:29.356057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:55.121 [2024-12-13 18:14:29.356068] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:19:55.121 [2024-12-13 18:14:29.356077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.121 [2024-12-13 18:14:29.356137] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:55.121 [2024-12-13 18:14:29.356154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:55.121 [2024-12-13 18:14:29.356344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356773] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 
18:14:29.356963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:55.122 [2024-12-13 18:14:29.356995] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:55.122 [2024-12-13 18:14:29.357003] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eca19c9b-a273-421a-bcd8-19fa71a11ad2 00:19:55.122 [2024-12-13 18:14:29.357011] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:55.122 [2024-12-13 18:14:29.357019] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:55.122 [2024-12-13 18:14:29.357027] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:55.122 [2024-12-13 18:14:29.357035] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:55.122 [2024-12-13 18:14:29.357042] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:55.122 [2024-12-13 18:14:29.357050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:55.122 [2024-12-13 18:14:29.357065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:55.122 [2024-12-13 18:14:29.357072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:55.122 [2024-12-13 18:14:29.357078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:55.122 [2024-12-13 18:14:29.357086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.122 [2024-12-13 18:14:29.357094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:55.122 [2024-12-13 18:14:29.357103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:19:55.122 [2024-12-13 18:14:29.357111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.359649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.122 [2024-12-13 18:14:29.359799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:55.122 [2024-12-13 18:14:29.359862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.518 ms 00:19:55.122 [2024-12-13 18:14:29.359895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.360036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.122 [2024-12-13 18:14:29.360193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:55.122 [2024-12-13 18:14:29.360220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:55.122 [2024-12-13 18:14:29.360256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.368006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.368171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.122 [2024-12-13 18:14:29.368227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.368275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.368391] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.368418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.122 [2024-12-13 18:14:29.368438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.368457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.368521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.368600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.122 [2024-12-13 18:14:29.368627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.368647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.368681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.368702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.122 [2024-12-13 18:14:29.368721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.368802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.381721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.381911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.122 [2024-12-13 18:14:29.381967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.381999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.391659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.391827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.122 [2024-12-13 18:14:29.391844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.391854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.391886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.391895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.122 [2024-12-13 18:14:29.391904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.391913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.391946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.391962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.122 [2024-12-13 18:14:29.391970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.391984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.392060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.392071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.122 [2024-12-13 18:14:29.392089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.392097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:55.122 [2024-12-13 18:14:29.392143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.392155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:55.122 [2024-12-13 18:14:29.392164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.392172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.392212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.392221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.122 [2024-12-13 18:14:29.392230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.392237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.392310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.122 [2024-12-13 18:14:29.392322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.122 [2024-12-13 18:14:29.392331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.122 [2024-12-13 18:14:29.392339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.122 [2024-12-13 18:14:29.392504] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.619 ms, result 0 00:19:55.383 00:19:55.383 00:19:55.383 18:14:29 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:55.954 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:55.954 18:14:30 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:55.954 18:14:30 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:55.954 18:14:30 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:55.954 18:14:30 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.954 18:14:30 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:55.954 18:14:30 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:55.954 18:14:30 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89525 00:19:55.954 18:14:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89525 ']' 00:19:55.954 18:14:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89525 00:19:55.954 Process with pid 89525 is not found 00:19:55.954 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89525) - No such process 00:19:55.954 18:14:30 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 89525 is not found' 00:19:55.954 00:19:55.954 real 1m6.452s 00:19:55.954 user 1m26.206s 00:19:55.954 sys 0m5.715s 00:19:55.954 ************************************ 00:19:55.954 END TEST ftl_trim 00:19:55.954 ************************************ 00:19:55.954 18:14:30 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.954 18:14:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:55.954 18:14:30 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:55.954 18:14:30 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:55.954 18:14:30 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.954 18:14:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:55.954 ************************************ 00:19:55.954 START TEST ftl_restore 00:19:55.954 ************************************ 00:19:55.954 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:56.216 * Looking for test storage... 00:19:56.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:56.216 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:56.216 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:19:56.216 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:56.216 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.216 18:14:30 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:56.216 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.216 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:56.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.216 --rc genhtml_branch_coverage=1 00:19:56.216 --rc genhtml_function_coverage=1 00:19:56.216 --rc genhtml_legend=1 00:19:56.216 --rc geninfo_all_blocks=1 00:19:56.216 --rc geninfo_unexecuted_blocks=1 00:19:56.216 00:19:56.216 ' 00:19:56.216 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:56.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.216 --rc genhtml_branch_coverage=1 00:19:56.216 --rc genhtml_function_coverage=1 00:19:56.216 --rc genhtml_legend=1 00:19:56.216 --rc geninfo_all_blocks=1 00:19:56.216 --rc geninfo_unexecuted_blocks=1 00:19:56.216 00:19:56.216 ' 00:19:56.216 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:56.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.216 --rc genhtml_branch_coverage=1 00:19:56.216 --rc genhtml_function_coverage=1 00:19:56.216 --rc genhtml_legend=1 00:19:56.216 --rc geninfo_all_blocks=1 00:19:56.216 --rc geninfo_unexecuted_blocks=1 00:19:56.216 00:19:56.216 ' 00:19:56.216 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:56.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.216 --rc genhtml_branch_coverage=1 00:19:56.216 --rc genhtml_function_coverage=1 00:19:56.216 --rc genhtml_legend=1 00:19:56.216 --rc geninfo_all_blocks=1 00:19:56.216 --rc geninfo_unexecuted_blocks=1 00:19:56.216 00:19:56.216 ' 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:56.216 18:14:30 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Sc5Q37d0cg 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:56.217 
18:14:30 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=89804 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 89804 00:19:56.217 18:14:30 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:56.217 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 89804 ']' 00:19:56.217 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.217 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.217 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.217 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.217 18:14:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:56.217 [2024-12-13 18:14:30.565778] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:56.217 [2024-12-13 18:14:30.566152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89804 ] 00:19:56.478 [2024-12-13 18:14:30.712889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.478 [2024-12-13 18:14:30.741440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.050 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.050 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:19:57.050 18:14:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:57.050 18:14:31 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:57.050 18:14:31 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:57.050 18:14:31 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:57.050 18:14:31 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:57.050 18:14:31 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:57.622 18:14:31 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:57.622 18:14:31 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:57.622 18:14:31 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:57.622 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:57.622 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:57.622 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:57.622 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:57.622 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:57.622 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:57.622 { 00:19:57.622 "name": "nvme0n1", 00:19:57.622 "aliases": [ 00:19:57.622 "99a8c38e-9507-44f6-a87d-ba08a50bafea" 00:19:57.622 ], 00:19:57.622 "product_name": "NVMe disk", 00:19:57.622 "block_size": 4096, 00:19:57.622 "num_blocks": 1310720, 00:19:57.622 "uuid": 
"99a8c38e-9507-44f6-a87d-ba08a50bafea", 00:19:57.622 "numa_id": -1, 00:19:57.622 "assigned_rate_limits": { 00:19:57.622 "rw_ios_per_sec": 0, 00:19:57.622 "rw_mbytes_per_sec": 0, 00:19:57.622 "r_mbytes_per_sec": 0, 00:19:57.622 "w_mbytes_per_sec": 0 00:19:57.622 }, 00:19:57.622 "claimed": true, 00:19:57.622 "claim_type": "read_many_write_one", 00:19:57.622 "zoned": false, 00:19:57.622 "supported_io_types": { 00:19:57.622 "read": true, 00:19:57.622 "write": true, 00:19:57.622 "unmap": true, 00:19:57.622 "flush": true, 00:19:57.622 "reset": true, 00:19:57.622 "nvme_admin": true, 00:19:57.622 "nvme_io": true, 00:19:57.622 "nvme_io_md": false, 00:19:57.622 "write_zeroes": true, 00:19:57.622 "zcopy": false, 00:19:57.622 "get_zone_info": false, 00:19:57.622 "zone_management": false, 00:19:57.622 "zone_append": false, 00:19:57.622 "compare": true, 00:19:57.622 "compare_and_write": false, 00:19:57.622 "abort": true, 00:19:57.622 "seek_hole": false, 00:19:57.622 "seek_data": false, 00:19:57.622 "copy": true, 00:19:57.622 "nvme_iov_md": false 00:19:57.622 }, 00:19:57.622 "driver_specific": { 00:19:57.622 "nvme": [ 00:19:57.622 { 00:19:57.622 "pci_address": "0000:00:11.0", 00:19:57.622 "trid": { 00:19:57.622 "trtype": "PCIe", 00:19:57.622 "traddr": "0000:00:11.0" 00:19:57.622 }, 00:19:57.622 "ctrlr_data": { 00:19:57.622 "cntlid": 0, 00:19:57.622 "vendor_id": "0x1b36", 00:19:57.622 "model_number": "QEMU NVMe Ctrl", 00:19:57.622 "serial_number": "12341", 00:19:57.622 "firmware_revision": "8.0.0", 00:19:57.622 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:57.622 "oacs": { 00:19:57.622 "security": 0, 00:19:57.622 "format": 1, 00:19:57.622 "firmware": 0, 00:19:57.622 "ns_manage": 1 00:19:57.622 }, 00:19:57.622 "multi_ctrlr": false, 00:19:57.622 "ana_reporting": false 00:19:57.622 }, 00:19:57.622 "vs": { 00:19:57.622 "nvme_version": "1.4" 00:19:57.622 }, 00:19:57.622 "ns_data": { 00:19:57.622 "id": 1, 00:19:57.622 "can_share": false 00:19:57.622 } 00:19:57.622 } 00:19:57.622 ], 00:19:57.622 "mp_policy": "active_passive" 00:19:57.622 } 00:19:57.622 } 00:19:57.622 ]' 00:19:57.622 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:57.622 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:57.622 18:14:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:57.883 18:14:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:57.883 18:14:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:57.883 18:14:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:19:57.883 18:14:32 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:57.883 18:14:32 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:57.883 18:14:32 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:57.883 18:14:32 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:57.883 18:14:32 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:57.883 18:14:32 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=0165edb5-731e-46a6-a4af-eaa2b79cf9d8 00:19:57.883 18:14:32 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:57.883 18:14:32 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0165edb5-731e-46a6-a4af-eaa2b79cf9d8 00:19:58.145 18:14:32 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:19:58.406 18:14:32 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=e825008a-9cec-4800-8c89-7b5872d92337 00:19:58.406 18:14:32 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e825008a-9cec-4800-8c89-7b5872d92337 00:19:58.667 18:14:32 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=6e516ede-c050-494c-a8b8-33df3371568c 00:19:58.667 18:14:32 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:58.667 18:14:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6e516ede-c050-494c-a8b8-33df3371568c 00:19:58.667 18:14:32 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:58.667 18:14:32 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:58.667 18:14:32 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=6e516ede-c050-494c-a8b8-33df3371568c 00:19:58.667 18:14:32 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:58.667 18:14:32 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 6e516ede-c050-494c-a8b8-33df3371568c 00:19:58.667 18:14:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6e516ede-c050-494c-a8b8-33df3371568c 00:19:58.667 18:14:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:58.667 18:14:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:58.667 18:14:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:58.667 18:14:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e516ede-c050-494c-a8b8-33df3371568c 00:19:58.928 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:58.928 { 00:19:58.928 "name": "6e516ede-c050-494c-a8b8-33df3371568c", 00:19:58.928 "aliases": [ 00:19:58.928 "lvs/nvme0n1p0" 00:19:58.928 ], 00:19:58.928 "product_name": "Logical Volume", 00:19:58.928 "block_size": 4096, 00:19:58.928 "num_blocks": 26476544, 00:19:58.928 "uuid": "6e516ede-c050-494c-a8b8-33df3371568c", 00:19:58.928 "assigned_rate_limits": { 00:19:58.928 "rw_ios_per_sec": 0, 00:19:58.928 "rw_mbytes_per_sec": 0, 00:19:58.928 "r_mbytes_per_sec": 0, 00:19:58.928 "w_mbytes_per_sec": 0 00:19:58.928 }, 00:19:58.928 "claimed": false, 00:19:58.928 "zoned": false, 00:19:58.928 "supported_io_types": { 00:19:58.928 "read": true, 00:19:58.928 "write": true, 00:19:58.928 "unmap": true, 00:19:58.928 "flush": false, 00:19:58.928 "reset": true, 00:19:58.928 "nvme_admin": false, 00:19:58.928 "nvme_io": false, 00:19:58.928 "nvme_io_md": false, 00:19:58.928 "write_zeroes": true, 00:19:58.928 "zcopy": false, 00:19:58.928 "get_zone_info": false, 00:19:58.928 "zone_management": false, 00:19:58.928 "zone_append": false, 00:19:58.928 "compare": false, 00:19:58.928 "compare_and_write": false, 00:19:58.928 "abort": false, 00:19:58.928 "seek_hole": true, 00:19:58.928 "seek_data": true, 00:19:58.928 "copy": false, 00:19:58.928 "nvme_iov_md": false 00:19:58.928 }, 00:19:58.928 "driver_specific": { 00:19:58.928 "lvol": { 00:19:58.928 "lvol_store_uuid": "e825008a-9cec-4800-8c89-7b5872d92337", 00:19:58.928 "base_bdev": "nvme0n1", 00:19:58.928 "thin_provision": true, 00:19:58.928 "num_allocated_clusters": 0, 00:19:58.928 "snapshot": false, 00:19:58.928 "clone": false, 00:19:58.928 "esnap_clone": false 00:19:58.928 } 00:19:58.928 } 00:19:58.928 } 00:19:58.928 ]' 00:19:58.928 18:14:33 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:58.928 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:58.928 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:58.928 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:58.928 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:58.928 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:58.928 18:14:33 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:58.928 18:14:33 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:58.928 18:14:33 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:59.189 18:14:33 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:59.189 18:14:33 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:59.189 18:14:33 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 6e516ede-c050-494c-a8b8-33df3371568c 00:19:59.189 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6e516ede-c050-494c-a8b8-33df3371568c 00:19:59.189 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:59.189 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:59.189 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:59.189 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e516ede-c050-494c-a8b8-33df3371568c 00:19:59.451 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:59.451 { 00:19:59.451 "name": "6e516ede-c050-494c-a8b8-33df3371568c", 00:19:59.451 "aliases": [ 00:19:59.451 "lvs/nvme0n1p0" 00:19:59.451 ], 00:19:59.451 "product_name": "Logical Volume", 00:19:59.451 "block_size": 4096, 00:19:59.451 "num_blocks": 26476544, 00:19:59.451 "uuid": "6e516ede-c050-494c-a8b8-33df3371568c", 00:19:59.451 "assigned_rate_limits": { 00:19:59.451 "rw_ios_per_sec": 0, 00:19:59.451 "rw_mbytes_per_sec": 0, 00:19:59.451 "r_mbytes_per_sec": 0, 00:19:59.451 "w_mbytes_per_sec": 0 00:19:59.451 }, 00:19:59.451 "claimed": false, 00:19:59.451 "zoned": false, 00:19:59.451 "supported_io_types": { 00:19:59.451 "read": true, 00:19:59.451 "write": true, 00:19:59.451 "unmap": true, 00:19:59.451 "flush": false, 00:19:59.451 "reset": true, 00:19:59.451 "nvme_admin": false, 00:19:59.451 "nvme_io": false, 00:19:59.451 "nvme_io_md": false, 00:19:59.451 "write_zeroes": true, 00:19:59.451 "zcopy": false, 00:19:59.451 "get_zone_info": false, 00:19:59.451 "zone_management": false, 00:19:59.451 "zone_append": false, 00:19:59.451 "compare": false, 00:19:59.451 "compare_and_write": false, 00:19:59.451 "abort": false, 00:19:59.451 "seek_hole": true, 00:19:59.451 "seek_data": true, 00:19:59.451 "copy": false, 00:19:59.451 "nvme_iov_md": false 00:19:59.451 }, 00:19:59.451 "driver_specific": { 00:19:59.451 "lvol": { 00:19:59.451 "lvol_store_uuid": "e825008a-9cec-4800-8c89-7b5872d92337", 00:19:59.451 "base_bdev": "nvme0n1", 00:19:59.451 "thin_provision": true, 00:19:59.451 "num_allocated_clusters": 0, 00:19:59.451 "snapshot": false, 00:19:59.451 "clone": false, 00:19:59.451 "esnap_clone": false 00:19:59.452 } 00:19:59.452 } 00:19:59.452 } 00:19:59.452 ]' 00:19:59.452 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:19:59.452 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:59.452 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:59.452 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:59.452 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:59.452 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:59.452 18:14:33 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:59.452 18:14:33 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:59.712 18:14:33 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:59.712 18:14:33 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 6e516ede-c050-494c-a8b8-33df3371568c 00:19:59.712 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=6e516ede-c050-494c-a8b8-33df3371568c 00:19:59.712 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:59.712 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:59.712 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:59.712 18:14:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e516ede-c050-494c-a8b8-33df3371568c 00:19:59.972 18:14:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:59.972 { 00:19:59.972 "name": "6e516ede-c050-494c-a8b8-33df3371568c", 00:19:59.972 "aliases": [ 00:19:59.972 "lvs/nvme0n1p0" 00:19:59.972 ], 00:19:59.972 "product_name": "Logical Volume", 00:19:59.972 "block_size": 4096, 00:19:59.972 "num_blocks": 26476544, 00:19:59.972 "uuid": "6e516ede-c050-494c-a8b8-33df3371568c", 00:19:59.972 "assigned_rate_limits": { 00:19:59.972 "rw_ios_per_sec": 0, 00:19:59.972 "rw_mbytes_per_sec": 0, 00:19:59.972 "r_mbytes_per_sec": 0, 00:19:59.972 "w_mbytes_per_sec": 0 00:19:59.972 }, 00:19:59.972 "claimed": false, 00:19:59.972 "zoned": false, 00:19:59.972 "supported_io_types": { 00:19:59.972 "read": true, 00:19:59.972 "write": true, 00:19:59.972 "unmap": true, 00:19:59.972 "flush": false, 00:19:59.972 "reset": true, 00:19:59.972 "nvme_admin": false, 00:19:59.972 "nvme_io": false, 00:19:59.972 "nvme_io_md": false, 00:19:59.972 "write_zeroes": true, 00:19:59.972 "zcopy": false, 00:19:59.972 "get_zone_info": false, 00:19:59.972 "zone_management": false, 00:19:59.972 "zone_append": false, 00:19:59.972 "compare": false, 00:19:59.972 "compare_and_write": false, 00:19:59.972 "abort": false, 00:19:59.972 "seek_hole": true, 00:19:59.972 "seek_data": true, 00:19:59.972 "copy": false, 00:19:59.972 "nvme_iov_md": false 00:19:59.972 }, 00:19:59.972 "driver_specific": { 00:19:59.972 "lvol": { 00:19:59.972 "lvol_store_uuid": "e825008a-9cec-4800-8c89-7b5872d92337", 00:19:59.972 "base_bdev": "nvme0n1", 00:19:59.972 "thin_provision": true, 00:19:59.972 "num_allocated_clusters": 0, 00:19:59.972 "snapshot": false, 00:19:59.972 "clone": false, 00:19:59.972 "esnap_clone": false 00:19:59.972 } 00:19:59.972 } 00:19:59.972 } 00:19:59.972 ]' 00:19:59.972 18:14:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:59.972 18:14:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:59.972 18:14:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:59.972 18:14:34 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:19:59.972 18:14:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:59.972 18:14:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:59.972 18:14:34 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:59.972 18:14:34 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6e516ede-c050-494c-a8b8-33df3371568c --l2p_dram_limit 10' 00:19:59.972 18:14:34 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:59.972 18:14:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:59.972 18:14:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:59.972 18:14:34 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:59.972 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:59.972 18:14:34 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6e516ede-c050-494c-a8b8-33df3371568c --l2p_dram_limit 10 -c nvc0n1p0 00:20:00.234 [2024-12-13 18:14:34.413230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.413282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.234 [2024-12-13 18:14:34.413293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:00.234 [2024-12-13 18:14:34.413301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.413345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.413354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.234 [2024-12-13 18:14:34.413362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:00.234 [2024-12-13 18:14:34.413371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.413386] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.234 [2024-12-13 18:14:34.413594] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.234 [2024-12-13 18:14:34.413606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.413615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.234 [2024-12-13 18:14:34.413621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:20:00.234 [2024-12-13 18:14:34.413629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.413651] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e17aebad-70d1-4ddb-8f87-3aaa225d11f2 00:20:00.234 [2024-12-13 18:14:34.414926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.414961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:00.234 [2024-12-13 18:14:34.414976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:00.234 [2024-12-13 18:14:34.414983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.419709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 
18:14:34.419828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.234 [2024-12-13 18:14:34.419843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.629 ms 00:20:00.234 [2024-12-13 18:14:34.419853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.419916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.419923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.234 [2024-12-13 18:14:34.419931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:00.234 [2024-12-13 18:14:34.419936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.419969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.419979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.234 [2024-12-13 18:14:34.419986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:00.234 [2024-12-13 18:14:34.419992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.420011] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.234 [2024-12-13 18:14:34.421289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.421315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.234 [2024-12-13 18:14:34.421322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:20:00.234 [2024-12-13 18:14:34.421329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.421355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.421363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.234 [2024-12-13 18:14:34.421373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:00.234 [2024-12-13 18:14:34.421383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.421404] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:00.234 [2024-12-13 18:14:34.421525] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.234 [2024-12-13 18:14:34.421534] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.234 [2024-12-13 18:14:34.421543] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.234 [2024-12-13 18:14:34.421551] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.234 [2024-12-13 18:14:34.421561] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.234 [2024-12-13 18:14:34.421567] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:00.234 [2024-12-13 18:14:34.421576] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.234 [2024-12-13 18:14:34.421581] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.234 [2024-12-13 18:14:34.421588] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.234 [2024-12-13 18:14:34.421593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.421601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.234 [2024-12-13 18:14:34.421607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:20:00.234 [2024-12-13 18:14:34.421613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.421678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.234 [2024-12-13 18:14:34.421687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.234 [2024-12-13 18:14:34.421693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:00.234 [2024-12-13 18:14:34.421703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.234 [2024-12-13 18:14:34.421777] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.234 [2024-12-13 18:14:34.421785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.234 [2024-12-13 18:14:34.421791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.234 [2024-12-13 18:14:34.421798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.234 [2024-12-13 18:14:34.421811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:00.234 [2024-12-13 18:14:34.421823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.234 [2024-12-13 18:14:34.421828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.234 [2024-12-13 18:14:34.421839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.234 [2024-12-13 18:14:34.421846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:00.234 [2024-12-13 18:14:34.421851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.234 [2024-12-13 18:14:34.421859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.234 [2024-12-13 18:14:34.421866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:00.234 [2024-12-13 18:14:34.421873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.234 [2024-12-13 18:14:34.421884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:00.234 [2024-12-13 18:14:34.421889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.234 [2024-12-13 18:14:34.421901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.234 [2024-12-13 18:14:34.421912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.234 
[2024-12-13 18:14:34.421918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.234 [2024-12-13 18:14:34.421930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.234 [2024-12-13 18:14:34.421936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.234 [2024-12-13 18:14:34.421948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.234 [2024-12-13 18:14:34.421957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.234 [2024-12-13 18:14:34.421969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.234 [2024-12-13 18:14:34.421975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:00.234 [2024-12-13 18:14:34.421982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.234 [2024-12-13 18:14:34.421987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.235 [2024-12-13 18:14:34.421995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:00.235 [2024-12-13 18:14:34.422001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.235 [2024-12-13 18:14:34.422007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.235 [2024-12-13 18:14:34.422013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:00.235 [2024-12-13 18:14:34.422020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.235 [2024-12-13 18:14:34.422025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.235 [2024-12-13 18:14:34.422032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:00.235 [2024-12-13 18:14:34.422038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.235 [2024-12-13 18:14:34.422045] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.235 [2024-12-13 18:14:34.422055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.235 [2024-12-13 18:14:34.422064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.235 [2024-12-13 18:14:34.422071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.235 [2024-12-13 18:14:34.422079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.235 [2024-12-13 18:14:34.422084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.235 [2024-12-13 18:14:34.422093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.235 [2024-12-13 18:14:34.422098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.235 [2024-12-13 18:14:34.422105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.235 [2024-12-13 18:14:34.422111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.235 [2024-12-13 18:14:34.422119] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.235 [2024-12-13 
18:14:34.422128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.235 [2024-12-13 18:14:34.422137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:00.235 [2024-12-13 18:14:34.422143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:00.235 [2024-12-13 18:14:34.422151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:00.235 [2024-12-13 18:14:34.422157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:00.235 [2024-12-13 18:14:34.422164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:00.235 [2024-12-13 18:14:34.422170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:00.235 [2024-12-13 18:14:34.422179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:00.235 [2024-12-13 18:14:34.422185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:00.235 [2024-12-13 18:14:34.422193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:00.235 [2024-12-13 18:14:34.422199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:00.235 [2024-12-13 18:14:34.422206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:00.235 [2024-12-13 18:14:34.422212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:00.235 [2024-12-13 18:14:34.422220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:00.235 [2024-12-13 18:14:34.422226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:00.235 [2024-12-13 18:14:34.422233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.235 [2024-12-13 18:14:34.422252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.235 [2024-12-13 18:14:34.422260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.235 [2024-12-13 18:14:34.422267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.235 [2024-12-13 18:14:34.422276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.235 [2024-12-13 18:14:34.422403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.235 [2024-12-13 18:14:34.422411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.235 [2024-12-13 18:14:34.422418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.235 [2024-12-13 18:14:34.422428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:20:00.235 [2024-12-13 18:14:34.422434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.235 [2024-12-13 18:14:34.422470] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:00.235 [2024-12-13 18:14:34.422478] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:04.447 [2024-12-13 18:14:38.114389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.114483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:04.447 [2024-12-13 18:14:38.114505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3691.898 ms 00:20:04.447 [2024-12-13 18:14:38.114514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.128946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.129004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.447 [2024-12-13 18:14:38.129027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.302 ms 00:20:04.447 [2024-12-13 18:14:38.129037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.129149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.129160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:04.447 [2024-12-13 18:14:38.129171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:04.447 [2024-12-13 18:14:38.129180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.141861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.142063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.447 [2024-12-13 18:14:38.142179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.621 ms 00:20:04.447 [2024-12-13 18:14:38.142210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.142279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.142355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.447 [2024-12-13 18:14:38.142384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:04.447 [2024-12-13 18:14:38.142404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.143213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.143382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.447 [2024-12-13 18:14:38.143448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:20:04.447 [2024-12-13 18:14:38.143473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 
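The startup trace above pins down the geometry: a 103424 MiB base device built from the 4096-byte, 26476544-block lvol, a 5171 MiB NV cache, and a 20971520-entry L2P with 4-byte addresses. A quick back-of-the-envelope check (plain Python, not SPDK code; every input is copied from the trace) shows how those figures follow from one another:

```python
# Back-of-the-envelope check of the sizes reported in the FTL startup trace.
# All inputs below are copied from the log; this is not SPDK code.
MiB = 1024 * 1024

block_size = 4096            # bs=4096 from the jq '.[] .block_size' step
num_blocks = 26_476_544      # nb=26476544 from the jq '.[] .num_blocks' step
assert block_size * num_blocks // MiB == 103_424   # "Base device capacity: 103424.00 MiB"

l2p_entries = 20_971_520     # "L2P entries: 20971520"
l2p_addr_size = 4            # "L2P address size: 4"
assert l2p_entries * l2p_addr_size // MiB == 80    # "Region l2p ... blocks: 80.00 MiB"

# --l2p_dram_limit 10 only caps how much of that 80 MiB table stays resident;
# the trace confirms it shortly: "l2p maximum resident size is: 9 (of 10) MiB".
```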
[2024-12-13 18:14:38.143620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.143799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.447 [2024-12-13 18:14:38.143829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:04.447 [2024-12-13 18:14:38.143850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.152102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.152280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.447 [2024-12-13 18:14:38.152303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.213 ms 00:20:04.447 [2024-12-13 18:14:38.152311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.171186] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:04.447 [2024-12-13 18:14:38.175733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.175801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:04.447 [2024-12-13 18:14:38.175819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.321 ms 00:20:04.447 [2024-12-13 18:14:38.175833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.261812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.261885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:04.447 [2024-12-13 18:14:38.261904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.922 ms 00:20:04.447 [2024-12-13 18:14:38.261918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.262135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.262150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:04.447 [2024-12-13 18:14:38.262160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:20:04.447 [2024-12-13 18:14:38.262170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.268161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.268223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:04.447 [2024-12-13 18:14:38.268238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.951 ms 00:20:04.447 [2024-12-13 18:14:38.268271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.273418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.273476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:04.447 [2024-12-13 18:14:38.273487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.095 ms 00:20:04.447 [2024-12-13 18:14:38.273497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.273836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.273850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:04.447 
[2024-12-13 18:14:38.273860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:20:04.447 [2024-12-13 18:14:38.273872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.321523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.447 [2024-12-13 18:14:38.321593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:04.447 [2024-12-13 18:14:38.321610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.629 ms 00:20:04.447 [2024-12-13 18:14:38.321621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.447 [2024-12-13 18:14:38.329056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.329117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:04.448 [2024-12-13 18:14:38.329129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.357 ms 00:20:04.448 [2024-12-13 18:14:38.329141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.335235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.335309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:04.448 [2024-12-13 18:14:38.335320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.043 ms 00:20:04.448 [2024-12-13 18:14:38.335330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.341663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.341723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:04.448 [2024-12-13 18:14:38.341734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.283 ms 00:20:04.448 [2024-12-13 18:14:38.341748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.341802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.341815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:04.448 [2024-12-13 18:14:38.341824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:04.448 [2024-12-13 18:14:38.341834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.341939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.341952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:04.448 [2024-12-13 18:14:38.341961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:04.448 [2024-12-13 18:14:38.341973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.343197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3929.457 ms, result 0 00:20:04.448 { 00:20:04.448 "name": "ftl0", 00:20:04.448 "uuid": "e17aebad-70d1-4ddb-8f87-3aaa225d11f2" 00:20:04.448 } 00:20:04.448 18:14:38 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:04.448 18:14:38 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:04.448 18:14:38 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:04.448 18:14:38 ftl.ftl_restore -- 
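Between startup and unload, restore.sh@61-63 wraps the live bdev configuration in a {"subsystems": [...]} envelope so the spdk_dd run later in the test can rebuild the same stack from JSON alone. A minimal sketch of that wrapping, assuming a running SPDK target and the same rpc.py path as the trace (the ftl.json output filename here is illustrative):

```python
# Sketch of the restore.sh@61-63 step: capture the bdev subsystem config and
# wrap it in the {"subsystems": [...]} envelope that spdk_dd accepts via --json.
# Assumes an SPDK target is running; the output filename is illustrative.
import json
import subprocess

rpc = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
out = subprocess.run([rpc, "save_subsystem_config", "-n", "bdev"],
                     check=True, capture_output=True, text=True).stdout
with open("ftl.json", "w") as f:
    json.dump({"subsystems": [json.loads(out)]}, f, indent=2)
```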
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:04.448 [2024-12-13 18:14:38.794576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.794803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:04.448 [2024-12-13 18:14:38.795016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:04.448 [2024-12-13 18:14:38.795064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.795124] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:04.448 [2024-12-13 18:14:38.796049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.796102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:04.448 [2024-12-13 18:14:38.796115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:20:04.448 [2024-12-13 18:14:38.796126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.796430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.796445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:04.448 [2024-12-13 18:14:38.796456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:04.448 [2024-12-13 18:14:38.796476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.799880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.799904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:04.448 [2024-12-13 18:14:38.799914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.387 ms 00:20:04.448 [2024-12-13 18:14:38.799925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.806496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.806544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:04.448 [2024-12-13 18:14:38.806556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.554 ms 00:20:04.448 [2024-12-13 18:14:38.806570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.809482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.809680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:04.448 [2024-12-13 18:14:38.809699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:20:04.448 [2024-12-13 18:14:38.809710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.815898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.815964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:04.448 [2024-12-13 18:14:38.815978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.073 ms 00:20:04.448 [2024-12-13 18:14:38.815992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.816134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.816147] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:04.448 [2024-12-13 18:14:38.816160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:04.448 [2024-12-13 18:14:38.816170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.448 [2024-12-13 18:14:38.819945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.448 [2024-12-13 18:14:38.820141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:04.448 [2024-12-13 18:14:38.820159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.756 ms 00:20:04.448 [2024-12-13 18:14:38.820169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.710 [2024-12-13 18:14:38.822892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.710 [2024-12-13 18:14:38.822954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:04.710 [2024-12-13 18:14:38.822964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.622 ms 00:20:04.710 [2024-12-13 18:14:38.822974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.710 [2024-12-13 18:14:38.825317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.710 [2024-12-13 18:14:38.825479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:04.710 [2024-12-13 18:14:38.825542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:20:04.710 [2024-12-13 18:14:38.825569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.710 [2024-12-13 18:14:38.827786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.710 [2024-12-13 18:14:38.827954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:04.710 [2024-12-13 18:14:38.827972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:20:04.710 [2024-12-13 18:14:38.827985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.710 [2024-12-13 18:14:38.828023] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:04.710 [2024-12-13 18:14:38.828041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828126] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 
[2024-12-13 18:14:38.828380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:04.710 [2024-12-13 18:14:38.828433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:04.711 [2024-12-13 18:14:38.828624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.828980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:04.711 [2024-12-13 18:14:38.829058] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:04.711 [2024-12-13 18:14:38.829067] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e17aebad-70d1-4ddb-8f87-3aaa225d11f2 00:20:04.711 [2024-12-13 18:14:38.829078] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:04.711 [2024-12-13 18:14:38.829092] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:04.711 [2024-12-13 18:14:38.829102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:04.711 [2024-12-13 18:14:38.829111] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:04.711 [2024-12-13 18:14:38.829120] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:04.711 [2024-12-13 18:14:38.829130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:04.711 [2024-12-13 18:14:38.829140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:04.711 [2024-12-13 18:14:38.829147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:04.711 [2024-12-13 18:14:38.829156] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:04.711 [2024-12-13 18:14:38.829163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.711 [2024-12-13 18:14:38.829172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:04.711 [2024-12-13 18:14:38.829181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.141 ms 00:20:04.711 [2024-12-13 18:14:38.829190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.711 [2024-12-13 18:14:38.831857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.711 [2024-12-13 18:14:38.831926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:04.711 [2024-12-13 18:14:38.831949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 00:20:04.711 [2024-12-13 18:14:38.831975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.711 [2024-12-13 18:14:38.832121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.711 [2024-12-13 18:14:38.832145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:04.711 [2024-12-13 18:14:38.832173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:04.711 [2024-12-13 18:14:38.832199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.711 [2024-12-13 18:14:38.840680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.711 [2024-12-13 18:14:38.840858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.711 [2024-12-13 18:14:38.840918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.711 [2024-12-13 18:14:38.840943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.711 [2024-12-13 18:14:38.841029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.711 [2024-12-13 18:14:38.841054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.711 [2024-12-13 18:14:38.841074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.841094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.841189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.841221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.712 [2024-12-13 18:14:38.841259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.841365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.841406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.841436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.712 [2024-12-13 18:14:38.841458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.841479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.855608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.855780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.712 [2024-12-13 18:14:38.855838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 
[2024-12-13 18:14:38.855867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.866956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.867120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.712 [2024-12-13 18:14:38.867173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.867200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.867315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.867349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.712 [2024-12-13 18:14:38.867370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.867391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.867533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.867560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.712 [2024-12-13 18:14:38.867580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.867602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.867696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.867780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.712 [2024-12-13 18:14:38.867801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.867822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.867868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.868139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:04.712 [2024-12-13 18:14:38.868199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.868224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.868300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.868336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.712 [2024-12-13 18:14:38.868357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.868378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.868455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.712 [2024-12-13 18:14:38.868490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.712 [2024-12-13 18:14:38.868609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.712 [2024-12-13 18:14:38.868634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.712 [2024-12-13 18:14:38.868796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.188 ms, result 0 00:20:04.712 true 00:20:04.712 18:14:38 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 89804 00:20:04.712 
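The shutdown just traced took 74.188 ms end to end, against 3929.457 ms for the initial startup. The trace itself explains the gap: 3691.898 ms of the startup was the one-time NV cache scrub, which a later clean load skips. Worked out from the durations recorded above:

```python
# Apportioning 'FTL startup' time using the durations recorded in the trace.
startup_ms = 3929.457   # "name 'FTL startup', duration = 3929.457 ms"
scrub_ms = 3691.898     # "name: Scrub NV cache ... duration: 3691.898 ms"
shutdown_ms = 74.188    # "name 'FTL shutdown', duration = 74.188 ms"

print(f"scrub share of startup: {scrub_ms / startup_ms:.1%}")    # ~94.0%
print(f"shutdown vs startup:    {shutdown_ms / startup_ms:.1%}")  # ~1.9%
```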
18:14:38 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89804 ']' 00:20:04.712 18:14:38 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89804 00:20:04.712 18:14:38 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:04.712 18:14:38 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.712 18:14:38 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89804 00:20:04.712 killing process with pid 89804 00:20:04.712 18:14:38 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.712 18:14:38 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.712 18:14:38 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89804' 00:20:04.712 18:14:38 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 89804 00:20:04.712 18:14:38 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 89804 00:20:10.004 18:14:43 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:14.211 262144+0 records in 00:20:14.211 262144+0 records out 00:20:14.211 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.39561 s, 244 MB/s 00:20:14.211 18:14:48 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:16.127 18:14:50 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:16.127 [2024-12-13 18:14:50.294430] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:20:16.127 [2024-12-13 18:14:50.294546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90023 ] 00:20:16.127 [2024-12-13 18:14:50.435982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.127 [2024-12-13 18:14:50.455415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.389 [2024-12-13 18:14:50.538442] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.389 [2024-12-13 18:14:50.538643] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.389 [2024-12-13 18:14:50.684195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.684230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:16.389 [2024-12-13 18:14:50.684241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.389 [2024-12-13 18:14:50.684260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.684296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.684307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.389 [2024-12-13 18:14:50.684313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:16.389 [2024-12-13 18:14:50.684322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.684340] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:16.389 [2024-12-13 18:14:50.684517] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:16.389 [2024-12-13 18:14:50.684528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.684536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.389 [2024-12-13 18:14:50.684546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:16.389 [2024-12-13 18:14:50.684552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.685469] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:16.389 [2024-12-13 18:14:50.687591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.687621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:16.389 [2024-12-13 18:14:50.687629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:20:16.389 [2024-12-13 18:14:50.687640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.687681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.687689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:16.389 [2024-12-13 18:14:50.687695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:16.389 [2024-12-13 18:14:50.687702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.692183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.692208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.389 [2024-12-13 18:14:50.692219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.454 ms 00:20:16.389 [2024-12-13 18:14:50.692225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.692300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.692308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.389 [2024-12-13 18:14:50.692315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:16.389 [2024-12-13 18:14:50.692320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.692361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.692369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:16.389 [2024-12-13 18:14:50.692376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:16.389 [2024-12-13 18:14:50.692387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.692403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:16.389 [2024-12-13 18:14:50.693675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.693744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.389 [2024-12-13 18:14:50.693788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:20:16.389 [2024-12-13 18:14:50.693805] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.693841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.389 [2024-12-13 18:14:50.693892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:16.389 [2024-12-13 18:14:50.693910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:16.389 [2024-12-13 18:14:50.693929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.389 [2024-12-13 18:14:50.693994] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:16.389 [2024-12-13 18:14:50.694023] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:16.390 [2024-12-13 18:14:50.694071] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:16.390 [2024-12-13 18:14:50.694258] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:16.390 [2024-12-13 18:14:50.694357] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:16.390 [2024-12-13 18:14:50.694417] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:16.390 [2024-12-13 18:14:50.694446] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:16.390 [2024-12-13 18:14:50.694471] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:16.390 [2024-12-13 18:14:50.694524] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:16.390 [2024-12-13 18:14:50.694547] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:16.390 [2024-12-13 18:14:50.694563] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:16.390 [2024-12-13 18:14:50.694577] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:16.390 [2024-12-13 18:14:50.694592] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:16.390 [2024-12-13 18:14:50.694638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-12-13 18:14:50.694656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:16.390 [2024-12-13 18:14:50.694671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:20:16.390 [2024-12-13 18:14:50.694685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-12-13 18:14:50.694768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-12-13 18:14:50.694819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:16.390 [2024-12-13 18:14:50.694834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:16.390 [2024-12-13 18:14:50.694849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-12-13 18:14:50.694932] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:16.390 [2024-12-13 18:14:50.694979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:16.390 [2024-12-13 18:14:50.694997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.390 
[2024-12-13 18:14:50.695016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:16.390 [2024-12-13 18:14:50.695044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:16.390 [2024-12-13 18:14:50.695102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:16.390 [2024-12-13 18:14:50.695115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.390 [2024-12-13 18:14:50.695143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:16.390 [2024-12-13 18:14:50.695183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:16.390 [2024-12-13 18:14:50.695202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.390 [2024-12-13 18:14:50.695216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:16.390 [2024-12-13 18:14:50.695231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:16.390 [2024-12-13 18:14:50.695258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:16.390 [2024-12-13 18:14:50.695315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:16.390 [2024-12-13 18:14:50.695329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:16.390 [2024-12-13 18:14:50.695357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.390 [2024-12-13 18:14:50.695385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:16.390 [2024-12-13 18:14:50.695427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.390 [2024-12-13 18:14:50.695458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:16.390 [2024-12-13 18:14:50.695465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.390 [2024-12-13 18:14:50.695480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:16.390 [2024-12-13 18:14:50.695485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.390 [2024-12-13 18:14:50.695495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:16.390 [2024-12-13 18:14:50.695501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.390 [2024-12-13 18:14:50.695510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:16.390 [2024-12-13 18:14:50.695516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:16.390 [2024-12-13 18:14:50.695521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.390 [2024-12-13 18:14:50.695526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:16.390 [2024-12-13 18:14:50.695531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:16.390 [2024-12-13 18:14:50.695536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:16.390 [2024-12-13 18:14:50.695546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:16.390 [2024-12-13 18:14:50.695551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695556] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:16.390 [2024-12-13 18:14:50.695565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:16.390 [2024-12-13 18:14:50.695571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.390 [2024-12-13 18:14:50.695576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.390 [2024-12-13 18:14:50.695582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:16.390 [2024-12-13 18:14:50.695587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:16.390 [2024-12-13 18:14:50.695592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:16.390 [2024-12-13 18:14:50.695597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:16.390 [2024-12-13 18:14:50.695602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:16.390 [2024-12-13 18:14:50.695607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:16.390 [2024-12-13 18:14:50.695614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:16.390 [2024-12-13 18:14:50.695621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.390 [2024-12-13 18:14:50.695628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:16.390 [2024-12-13 18:14:50.695634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:16.390 [2024-12-13 18:14:50.695639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:16.390 [2024-12-13 18:14:50.695644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:16.390 [2024-12-13 18:14:50.695650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:16.390 [2024-12-13 18:14:50.695658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:16.390 [2024-12-13 18:14:50.695663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:16.390 [2024-12-13 18:14:50.695669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:16.390 [2024-12-13 18:14:50.695675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:16.390 [2024-12-13 18:14:50.695684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:16.390 [2024-12-13 18:14:50.695689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:16.390 [2024-12-13 18:14:50.695695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:16.390 [2024-12-13 18:14:50.695700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:16.390 [2024-12-13 18:14:50.695705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:16.390 [2024-12-13 18:14:50.695711] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:16.390 [2024-12-13 18:14:50.695720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.390 [2024-12-13 18:14:50.695726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:16.390 [2024-12-13 18:14:50.695732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:16.390 [2024-12-13 18:14:50.695737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:16.390 [2024-12-13 18:14:50.695742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:16.390 [2024-12-13 18:14:50.695748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-12-13 18:14:50.695755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:16.390 [2024-12-13 18:14:50.695761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:20:16.390 [2024-12-13 18:14:50.695770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.390 [2024-12-13 18:14:50.703880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.390 [2024-12-13 18:14:50.703907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.390 [2024-12-13 18:14:50.703917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.075 ms 00:20:16.391 [2024-12-13 18:14:50.703923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.703984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.703990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:16.391 [2024-12-13 18:14:50.703996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 
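The region sizes in the layout dump above can be cross-checked by hand from parameters the log itself reports: 20971520 L2P entries at an address size of 4 bytes need 20971520 * 4 B = 80 MiB, which is exactly the "Region l2p ... blocks: 80.00 MiB" entry, and band_md then starts right after sb + l2p at 0.12 MiB + 80.00 MiB = 80.12 MiB. A minimal shell check, with every input value read off the dump above rather than queried from SPDK:

    # values from the ftl_layout.c:689-690 lines and the "Region sb" entry above
    l2p_entries=20971520
    l2p_addr_size=4
    echo "l2p region: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"   # 80
    echo "band_md offset: $(echo "0.12 + 80.00" | bc) MiB"                    # 80.12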
00:20:16.391 [2024-12-13 18:14:50.704005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.733510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.733553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.391 [2024-12-13 18:14:50.733565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.465 ms 00:20:16.391 [2024-12-13 18:14:50.733576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.733615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.733625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.391 [2024-12-13 18:14:50.733633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.391 [2024-12-13 18:14:50.733641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.733982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.734004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.391 [2024-12-13 18:14:50.734013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:16.391 [2024-12-13 18:14:50.734020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.734139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.734152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.391 [2024-12-13 18:14:50.734164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:16.391 [2024-12-13 18:14:50.734174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.739154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.739183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.391 [2024-12-13 18:14:50.739192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.961 ms 00:20:16.391 [2024-12-13 18:14:50.739199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.741859] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:16.391 [2024-12-13 18:14:50.741895] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:16.391 [2024-12-13 18:14:50.741912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.741920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:16.391 [2024-12-13 18:14:50.741927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:20:16.391 [2024-12-13 18:14:50.741934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.756654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.756785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:16.391 [2024-12-13 18:14:50.756801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.683 ms 00:20:16.391 [2024-12-13 18:14:50.756813] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.758909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.758940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:16.391 [2024-12-13 18:14:50.758949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:20:16.391 [2024-12-13 18:14:50.758956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.760941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.760973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:16.391 [2024-12-13 18:14:50.760982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.954 ms 00:20:16.391 [2024-12-13 18:14:50.760988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.391 [2024-12-13 18:14:50.761360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.391 [2024-12-13 18:14:50.761381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:16.391 [2024-12-13 18:14:50.761393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:20:16.391 [2024-12-13 18:14:50.761401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.652 [2024-12-13 18:14:50.778099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.652 [2024-12-13 18:14:50.778147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:16.652 [2024-12-13 18:14:50.778157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.683 ms 00:20:16.652 [2024-12-13 18:14:50.778165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.652 [2024-12-13 18:14:50.785521] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:16.652 [2024-12-13 18:14:50.787851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.652 [2024-12-13 18:14:50.787989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:16.652 [2024-12-13 18:14:50.788009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.651 ms 00:20:16.652 [2024-12-13 18:14:50.788017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.652 [2024-12-13 18:14:50.788088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.652 [2024-12-13 18:14:50.788098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:16.652 [2024-12-13 18:14:50.788112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:16.652 [2024-12-13 18:14:50.788125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.652 [2024-12-13 18:14:50.788190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.652 [2024-12-13 18:14:50.788200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:16.652 [2024-12-13 18:14:50.788207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:16.652 [2024-12-13 18:14:50.788217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.652 [2024-12-13 18:14:50.788235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.652 [2024-12-13 18:14:50.788255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:20:16.652 [2024-12-13 18:14:50.788263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:16.652 [2024-12-13 18:14:50.788274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.652 [2024-12-13 18:14:50.788305] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:16.652 [2024-12-13 18:14:50.788315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.652 [2024-12-13 18:14:50.788322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:16.652 [2024-12-13 18:14:50.788330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:16.652 [2024-12-13 18:14:50.788338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.652 [2024-12-13 18:14:50.791751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.652 [2024-12-13 18:14:50.791783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:16.652 [2024-12-13 18:14:50.791793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:20:16.652 [2024-12-13 18:14:50.791801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.652 [2024-12-13 18:14:50.791864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.652 [2024-12-13 18:14:50.791880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:16.652 [2024-12-13 18:14:50.791888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:16.652 [2024-12-13 18:14:50.791895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.652 [2024-12-13 18:14:50.792797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.197 ms, result 0 00:20:17.596  [2024-12-13T18:14:52.919Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-13T18:14:53.863Z] Copying: 34/1024 [MB] (14 MBps) [2024-12-13T18:14:54.807Z] Copying: 49/1024 [MB] (14 MBps) [2024-12-13T18:14:56.196Z] Copying: 63/1024 [MB] (13 MBps) [2024-12-13T18:14:57.152Z] Copying: 80/1024 [MB] (17 MBps) [2024-12-13T18:14:58.146Z] Copying: 106/1024 [MB] (25 MBps) [2024-12-13T18:14:59.092Z] Copying: 122/1024 [MB] (16 MBps) [2024-12-13T18:15:00.036Z] Copying: 138/1024 [MB] (16 MBps) [2024-12-13T18:15:00.982Z] Copying: 157/1024 [MB] (18 MBps) [2024-12-13T18:15:01.926Z] Copying: 184/1024 [MB] (26 MBps) [2024-12-13T18:15:02.868Z] Copying: 220/1024 [MB] (35 MBps) [2024-12-13T18:15:03.813Z] Copying: 236/1024 [MB] (16 MBps) [2024-12-13T18:15:05.203Z] Copying: 263/1024 [MB] (27 MBps) [2024-12-13T18:15:06.147Z] Copying: 286/1024 [MB] (22 MBps) [2024-12-13T18:15:07.091Z] Copying: 299/1024 [MB] (13 MBps) [2024-12-13T18:15:08.035Z] Copying: 316/1024 [MB] (16 MBps) [2024-12-13T18:15:08.981Z] Copying: 332/1024 [MB] (16 MBps) [2024-12-13T18:15:09.925Z] Copying: 349/1024 [MB] (16 MBps) [2024-12-13T18:15:10.870Z] Copying: 369/1024 [MB] (20 MBps) [2024-12-13T18:15:11.815Z] Copying: 387/1024 [MB] (18 MBps) [2024-12-13T18:15:13.202Z] Copying: 408/1024 [MB] (20 MBps) [2024-12-13T18:15:14.145Z] Copying: 429/1024 [MB] (21 MBps) [2024-12-13T18:15:15.090Z] Copying: 448/1024 [MB] (19 MBps) [2024-12-13T18:15:16.034Z] Copying: 465/1024 [MB] (17 MBps) [2024-12-13T18:15:16.977Z] Copying: 480/1024 [MB] (14 MBps) [2024-12-13T18:15:17.920Z] Copying: 497/1024 [MB] (17 MBps) [2024-12-13T18:15:18.865Z] Copying: 516/1024 [MB] (18 MBps) 
[2024-12-13T18:15:19.835Z] Copying: 530/1024 [MB] (14 MBps) [2024-12-13T18:15:21.221Z] Copying: 548/1024 [MB] (17 MBps) [2024-12-13T18:15:22.164Z] Copying: 566/1024 [MB] (18 MBps) [2024-12-13T18:15:23.109Z] Copying: 579/1024 [MB] (13 MBps) [2024-12-13T18:15:24.053Z] Copying: 595/1024 [MB] (15 MBps) [2024-12-13T18:15:24.998Z] Copying: 611/1024 [MB] (15 MBps) [2024-12-13T18:15:25.945Z] Copying: 627/1024 [MB] (15 MBps) [2024-12-13T18:15:26.889Z] Copying: 643/1024 [MB] (16 MBps) [2024-12-13T18:15:27.832Z] Copying: 654/1024 [MB] (10 MBps) [2024-12-13T18:15:29.219Z] Copying: 665/1024 [MB] (10 MBps) [2024-12-13T18:15:30.163Z] Copying: 675/1024 [MB] (10 MBps) [2024-12-13T18:15:31.107Z] Copying: 695/1024 [MB] (20 MBps) [2024-12-13T18:15:32.052Z] Copying: 737/1024 [MB] (41 MBps) [2024-12-13T18:15:32.995Z] Copying: 752/1024 [MB] (14 MBps) [2024-12-13T18:15:33.938Z] Copying: 772/1024 [MB] (19 MBps) [2024-12-13T18:15:34.896Z] Copying: 790/1024 [MB] (17 MBps) [2024-12-13T18:15:35.840Z] Copying: 807/1024 [MB] (16 MBps) [2024-12-13T18:15:37.229Z] Copying: 821/1024 [MB] (14 MBps) [2024-12-13T18:15:38.173Z] Copying: 839/1024 [MB] (17 MBps) [2024-12-13T18:15:39.116Z] Copying: 852/1024 [MB] (12 MBps) [2024-12-13T18:15:40.059Z] Copying: 871/1024 [MB] (19 MBps) [2024-12-13T18:15:41.002Z] Copying: 899/1024 [MB] (27 MBps) [2024-12-13T18:15:41.945Z] Copying: 916/1024 [MB] (16 MBps) [2024-12-13T18:15:42.920Z] Copying: 932/1024 [MB] (16 MBps) [2024-12-13T18:15:43.865Z] Copying: 952/1024 [MB] (19 MBps) [2024-12-13T18:15:44.808Z] Copying: 980/1024 [MB] (28 MBps) [2024-12-13T18:15:45.381Z] Copying: 1017/1024 [MB] (36 MBps) [2024-12-13T18:15:45.381Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-13 18:15:45.086762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.086808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:11.004 [2024-12-13 18:15:45.086821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:11.004 [2024-12-13 18:15:45.086836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.086856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:11.004 [2024-12-13 18:15:45.087400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.087419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:11.004 [2024-12-13 18:15:45.087428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:21:11.004 [2024-12-13 18:15:45.087442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.089424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.089457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:11.004 [2024-12-13 18:15:45.089466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.962 ms 00:21:11.004 [2024-12-13 18:15:45.089481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.105218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.105380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:11.004 [2024-12-13 18:15:45.105398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.720 ms 00:21:11.004 [2024-12-13 18:15:45.105406] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.111574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.111603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:11.004 [2024-12-13 18:15:45.111613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.097 ms 00:21:11.004 [2024-12-13 18:15:45.111620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.113950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.113986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:11.004 [2024-12-13 18:15:45.113997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms 00:21:11.004 [2024-12-13 18:15:45.114005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.118082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.118122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:11.004 [2024-12-13 18:15:45.118132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.044 ms 00:21:11.004 [2024-12-13 18:15:45.118139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.118270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.118281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:11.004 [2024-12-13 18:15:45.118289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:11.004 [2024-12-13 18:15:45.118297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.120972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.121102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:11.004 [2024-12-13 18:15:45.121118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:21:11.004 [2024-12-13 18:15:45.121125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.123325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.123353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:11.004 [2024-12-13 18:15:45.123361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:21:11.004 [2024-12-13 18:15:45.123368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.125215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.125266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:11.004 [2024-12-13 18:15:45.125275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:21:11.004 [2024-12-13 18:15:45.125282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.126932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.004 [2024-12-13 18:15:45.126968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:11.004 [2024-12-13 18:15:45.126977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.596 ms 00:21:11.004 [2024-12-13 18:15:45.126985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.004 [2024-12-13 18:15:45.127016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:11.004 [2024-12-13 18:15:45.127031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 
[2024-12-13 18:15:45.127205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:11.004 [2024-12-13 18:15:45.127219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:21:11.005 [2024-12-13 18:15:45.127413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:11.005 [2024-12-13 18:15:45.127806] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:11.005 [2024-12-13 18:15:45.127814] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e17aebad-70d1-4ddb-8f87-3aaa225d11f2 00:21:11.005 [2024-12-13 18:15:45.127822] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:11.005 [2024-12-13 18:15:45.127829] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:11.005 [2024-12-13 18:15:45.127836] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:11.005 [2024-12-13 18:15:45.127844] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:11.005 [2024-12-13 18:15:45.127850] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:11.005 [2024-12-13 18:15:45.127858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:11.005 [2024-12-13 18:15:45.127865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:11.005 [2024-12-13 18:15:45.127871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:11.005 [2024-12-13 18:15:45.127878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:11.005 [2024-12-13 18:15:45.127884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.005 [2024-12-13 18:15:45.127892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:11.005 [2024-12-13 18:15:45.127905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:21:11.005 [2024-12-13 18:15:45.127912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.005 [2024-12-13 18:15:45.129789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.005 [2024-12-13 18:15:45.129894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:11.005 [2024-12-13 18:15:45.129947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.862 ms 00:21:11.005 [2024-12-13 18:15:45.129970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.005 [2024-12-13 18:15:45.130077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.006 [2024-12-13 18:15:45.130100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:11.006 [2024-12-13 18:15:45.130120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:11.006 [2024-12-13 18:15:45.130192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.135873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.136001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.006 [2024-12-13 18:15:45.136054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.136076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.136146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.136167] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.006 [2024-12-13 18:15:45.136187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.136205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.136280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.136305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.006 [2024-12-13 18:15:45.136326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.136384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.136416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.136603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.006 [2024-12-13 18:15:45.136627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.136646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.147276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.147427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.006 [2024-12-13 18:15:45.147478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.147499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.156176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.156357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.006 [2024-12-13 18:15:45.156412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.156443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.156520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.156544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.006 [2024-12-13 18:15:45.156564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.156583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.156621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.156642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.006 [2024-12-13 18:15:45.156662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.156730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.156813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.156823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.006 [2024-12-13 18:15:45.156832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.156839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.156869] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.156878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:11.006 [2024-12-13 18:15:45.156887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.156898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.156940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.156949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.006 [2024-12-13 18:15:45.156957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.156965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.157009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.006 [2024-12-13 18:15:45.157019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.006 [2024-12-13 18:15:45.157027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.006 [2024-12-13 18:15:45.157037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.006 [2024-12-13 18:15:45.157169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.368 ms, result 0 00:21:11.577 00:21:11.577 00:21:11.577 18:15:45 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:11.577 [2024-12-13 18:15:45.935005] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
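The spdk_dd invocation at restore.sh@74 just above is the mirror image of the @73 write: it reads the data back out of ftl0 into the same testfile so the checksums can be compared. The size works out to the full payload written earlier, since --count=262144 blocks at the 4 KiB unit used by the dd at @69 (which reported 262144 records, 1073741824 bytes) gives:

    echo $(( 262144 * 4096 ))   # 1073741824 bytes = 1 GiB

i.e. the read-back covers exactly the 1.0 GiB the test generated, assuming spdk_dd operates on the same 4 KiB units here (that unit is inferred from the dd line, not stated by spdk_dd itself).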
00:21:11.577 [2024-12-13 18:15:45.935145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90599 ] 00:21:11.838 [2024-12-13 18:15:46.080024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.838 [2024-12-13 18:15:46.108207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.100 [2024-12-13 18:15:46.224100] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.100 [2024-12-13 18:15:46.224200] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.100 [2024-12-13 18:15:46.384190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.384271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:12.100 [2024-12-13 18:15:46.384288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.100 [2024-12-13 18:15:46.384296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.384356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.384368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.100 [2024-12-13 18:15:46.384381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:12.100 [2024-12-13 18:15:46.384397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.384424] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:12.100 [2024-12-13 18:15:46.384783] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:12.100 [2024-12-13 18:15:46.384817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.384832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.100 [2024-12-13 18:15:46.384845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:21:12.100 [2024-12-13 18:15:46.384853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.386574] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:12.100 [2024-12-13 18:15:46.390188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.390261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:12.100 [2024-12-13 18:15:46.390281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.616 ms 00:21:12.100 [2024-12-13 18:15:46.390295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.390368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.390381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:12.100 [2024-12-13 18:15:46.390395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:12.100 [2024-12-13 18:15:46.390403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.398298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:12.100 [2024-12-13 18:15:46.398344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.100 [2024-12-13 18:15:46.398358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.851 ms 00:21:12.100 [2024-12-13 18:15:46.398366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.398462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.398472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.100 [2024-12-13 18:15:46.398481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:12.100 [2024-12-13 18:15:46.398489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.398547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.398558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:12.100 [2024-12-13 18:15:46.398567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:12.100 [2024-12-13 18:15:46.398577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.398598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:12.100 [2024-12-13 18:15:46.400657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.400832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.100 [2024-12-13 18:15:46.400851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:21:12.100 [2024-12-13 18:15:46.400859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.400908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.400916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:12.100 [2024-12-13 18:15:46.400928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:12.100 [2024-12-13 18:15:46.400938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.400964] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:12.100 [2024-12-13 18:15:46.400986] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:12.100 [2024-12-13 18:15:46.401026] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:12.100 [2024-12-13 18:15:46.401041] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:12.100 [2024-12-13 18:15:46.401147] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:12.100 [2024-12-13 18:15:46.401158] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:12.100 [2024-12-13 18:15:46.401174] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:12.100 [2024-12-13 18:15:46.401185] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:12.100 [2024-12-13 18:15:46.401197] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:12.100 [2024-12-13 18:15:46.401206] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:12.100 [2024-12-13 18:15:46.401214] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:12.100 [2024-12-13 18:15:46.401225] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:12.100 [2024-12-13 18:15:46.401232] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:12.100 [2024-12-13 18:15:46.401266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.401279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:12.100 [2024-12-13 18:15:46.401287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:21:12.100 [2024-12-13 18:15:46.401298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.100 [2024-12-13 18:15:46.401384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.100 [2024-12-13 18:15:46.401394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:12.100 [2024-12-13 18:15:46.401403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:12.100 [2024-12-13 18:15:46.401411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.101 [2024-12-13 18:15:46.401532] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:12.101 [2024-12-13 18:15:46.401551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:12.101 [2024-12-13 18:15:46.401565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.101 [2024-12-13 18:15:46.401581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:12.101 [2024-12-13 18:15:46.401613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:12.101 [2024-12-13 18:15:46.401637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:12.101 [2024-12-13 18:15:46.401652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.101 [2024-12-13 18:15:46.401681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:12.101 [2024-12-13 18:15:46.401699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:12.101 [2024-12-13 18:15:46.401720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.101 [2024-12-13 18:15:46.401732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:12.101 [2024-12-13 18:15:46.401744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:12.101 [2024-12-13 18:15:46.401761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:12.101 [2024-12-13 18:15:46.401790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:12.101 [2024-12-13 18:15:46.401805] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:12.101 [2024-12-13 18:15:46.401835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.101 [2024-12-13 18:15:46.401861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:12.101 [2024-12-13 18:15:46.401874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.101 [2024-12-13 18:15:46.401902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:12.101 [2024-12-13 18:15:46.401914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.101 [2024-12-13 18:15:46.401943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:12.101 [2024-12-13 18:15:46.401955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.101 [2024-12-13 18:15:46.401977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:12.101 [2024-12-13 18:15:46.401988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:12.101 [2024-12-13 18:15:46.401999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.101 [2024-12-13 18:15:46.402011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:12.101 [2024-12-13 18:15:46.402023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:12.101 [2024-12-13 18:15:46.402039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.101 [2024-12-13 18:15:46.402050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:12.101 [2024-12-13 18:15:46.402063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:12.101 [2024-12-13 18:15:46.402074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.101 [2024-12-13 18:15:46.402086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:12.101 [2024-12-13 18:15:46.402103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:12.101 [2024-12-13 18:15:46.402114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.101 [2024-12-13 18:15:46.402129] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:12.101 [2024-12-13 18:15:46.402150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:12.101 [2024-12-13 18:15:46.402161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.101 [2024-12-13 18:15:46.402169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.101 [2024-12-13 18:15:46.402180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:12.101 [2024-12-13 18:15:46.402188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:12.101 [2024-12-13 18:15:46.402195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:12.101 
[2024-12-13 18:15:46.402202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:12.101 [2024-12-13 18:15:46.402209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:12.101 [2024-12-13 18:15:46.402216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:12.101 [2024-12-13 18:15:46.402226] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:12.101 [2024-12-13 18:15:46.402237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.101 [2024-12-13 18:15:46.402261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:12.101 [2024-12-13 18:15:46.402268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:12.101 [2024-12-13 18:15:46.402276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:12.101 [2024-12-13 18:15:46.402283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:12.101 [2024-12-13 18:15:46.402294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:12.101 [2024-12-13 18:15:46.402302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:12.101 [2024-12-13 18:15:46.402310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:12.101 [2024-12-13 18:15:46.402317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:12.101 [2024-12-13 18:15:46.402324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:12.101 [2024-12-13 18:15:46.402339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:12.101 [2024-12-13 18:15:46.402347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:12.101 [2024-12-13 18:15:46.402354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:12.101 [2024-12-13 18:15:46.402361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:12.101 [2024-12-13 18:15:46.402369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:12.101 [2024-12-13 18:15:46.402377] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:12.101 [2024-12-13 18:15:46.402385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.101 [2024-12-13 18:15:46.402393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:12.101 [2024-12-13 18:15:46.402402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:12.101 [2024-12-13 18:15:46.402409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:12.101 [2024-12-13 18:15:46.402416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:12.101 [2024-12-13 18:15:46.402428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.101 [2024-12-13 18:15:46.402436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:12.101 [2024-12-13 18:15:46.402444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:21:12.101 [2024-12-13 18:15:46.402454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.101 [2024-12-13 18:15:46.416134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.101 [2024-12-13 18:15:46.416345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.101 [2024-12-13 18:15:46.416365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.604 ms 00:21:12.101 [2024-12-13 18:15:46.416373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.101 [2024-12-13 18:15:46.416460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.101 [2024-12-13 18:15:46.416469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:12.101 [2024-12-13 18:15:46.416478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:12.101 [2024-12-13 18:15:46.416486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.101 [2024-12-13 18:15:46.438608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.101 [2024-12-13 18:15:46.438674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.101 [2024-12-13 18:15:46.438691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.053 ms 00:21:12.101 [2024-12-13 18:15:46.438709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.101 [2024-12-13 18:15:46.438768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.101 [2024-12-13 18:15:46.438782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.101 [2024-12-13 18:15:46.438795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:12.101 [2024-12-13 18:15:46.438805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.101 [2024-12-13 18:15:46.439438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.101 [2024-12-13 18:15:46.439540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.101 [2024-12-13 18:15:46.439571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:21:12.101 [2024-12-13 18:15:46.439583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.101 [2024-12-13 18:15:46.439781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.101 [2024-12-13 18:15:46.439795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.102 [2024-12-13 18:15:46.439811] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:21:12.102 [2024-12-13 18:15:46.439822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.102 [2024-12-13 18:15:46.448208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.102 [2024-12-13 18:15:46.448286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.102 [2024-12-13 18:15:46.448299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.355 ms 00:21:12.102 [2024-12-13 18:15:46.448310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.102 [2024-12-13 18:15:46.452119] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:12.102 [2024-12-13 18:15:46.452170] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:12.102 [2024-12-13 18:15:46.452186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.102 [2024-12-13 18:15:46.452194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:12.102 [2024-12-13 18:15:46.452202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.748 ms 00:21:12.102 [2024-12-13 18:15:46.452209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.102 [2024-12-13 18:15:46.467994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.102 [2024-12-13 18:15:46.468042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:12.102 [2024-12-13 18:15:46.468060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.707 ms 00:21:12.102 [2024-12-13 18:15:46.468068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.102 [2024-12-13 18:15:46.470943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.102 [2024-12-13 18:15:46.471103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:12.102 [2024-12-13 18:15:46.471121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:21:12.102 [2024-12-13 18:15:46.471129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.102 [2024-12-13 18:15:46.473706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.102 [2024-12-13 18:15:46.473752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:12.102 [2024-12-13 18:15:46.473762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.542 ms 00:21:12.102 [2024-12-13 18:15:46.473768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.102 [2024-12-13 18:15:46.474116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.102 [2024-12-13 18:15:46.474129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:12.102 [2024-12-13 18:15:46.474144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:21:12.102 [2024-12-13 18:15:46.474152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.363 [2024-12-13 18:15:46.497586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.363 [2024-12-13 18:15:46.497788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:12.363 [2024-12-13 18:15:46.497848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.411 ms 00:21:12.363 [2024-12-13 18:15:46.497873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.363 [2024-12-13 18:15:46.505964] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:12.363 [2024-12-13 18:15:46.509051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.363 [2024-12-13 18:15:46.509190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:12.363 [2024-12-13 18:15:46.509279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.057 ms 00:21:12.363 [2024-12-13 18:15:46.509295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.363 [2024-12-13 18:15:46.509375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.363 [2024-12-13 18:15:46.509388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:12.363 [2024-12-13 18:15:46.509406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:12.363 [2024-12-13 18:15:46.509414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.363 [2024-12-13 18:15:46.509486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.363 [2024-12-13 18:15:46.509500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:12.363 [2024-12-13 18:15:46.509510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:12.363 [2024-12-13 18:15:46.509517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.363 [2024-12-13 18:15:46.509538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.363 [2024-12-13 18:15:46.509547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:12.363 [2024-12-13 18:15:46.509556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.363 [2024-12-13 18:15:46.509568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.363 [2024-12-13 18:15:46.509608] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:12.363 [2024-12-13 18:15:46.509619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.363 [2024-12-13 18:15:46.509628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:12.363 [2024-12-13 18:15:46.509639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:12.363 [2024-12-13 18:15:46.509648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.363 [2024-12-13 18:15:46.515089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.363 [2024-12-13 18:15:46.515136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:12.363 [2024-12-13 18:15:46.515147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.422 ms 00:21:12.363 [2024-12-13 18:15:46.515156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.363 [2024-12-13 18:15:46.515268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.363 [2024-12-13 18:15:46.515279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:12.363 [2024-12-13 18:15:46.515289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:12.363 [2024-12-13 18:15:46.515303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.363 
[2024-12-13 18:15:46.516580] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.880 ms, result 0
00:21:13.750 [2024-12-13T18:15:48.700Z .. 2024-12-13T18:16:50.121Z] Copying: 13/1024 .. 1024/1024 [MB] (average 16 MBps; intermediate spdk_dd progress updates collapsed)
[2024-12-13 18:16:50.041127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.041220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:15.744 [2024-12-13 18:16:50.041268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:22:15.744 [2024-12-13 18:16:50.041289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.041327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:15.744 [2024-12-13 18:16:50.042351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.042404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:15.744 [2024-12-13 18:16:50.042421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms
00:22:15.744 [2024-12-13 18:16:50.042435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.042794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.042817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:22:15.744 [2024-12-13 18:16:50.042837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms
00:22:15.744 [2024-12-13 18:16:50.042853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.047076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.047099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:22:15.744 [2024-12-13 18:16:50.047115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.202 ms
00:22:15.744 [2024-12-13 18:16:50.047123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.053792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.053836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:22:15.744 [2024-12-13 18:16:50.053847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.647 ms
00:22:15.744 [2024-12-13 18:16:50.053856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.056818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.056873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:22:15.744 [2024-12-13 18:16:50.056884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.883 ms
00:22:15.744 [2024-12-13 18:16:50.056892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.062530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.062740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:22:15.744 [2024-12-13 18:16:50.062761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.591 ms
00:22:15.744 [2024-12-13 18:16:50.062770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.062989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.063018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:22:15.744 [2024-12-13 18:16:50.063030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms
00:22:15.744 [2024-12-13 18:16:50.063043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.066395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.066448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:22:15.744 [2024-12-13 18:16:50.066459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms
00:22:15.744 [2024-12-13 18:16:50.066467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.070093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.070144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:15.744 [2024-12-13 18:16:50.070155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms
00:22:15.744 [2024-12-13 18:16:50.070163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.072703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.072746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:15.744 [2024-12-13 18:16:50.072755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.492 ms
00:22:15.744 [2024-12-13 18:16:50.072762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.075096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:15.744 [2024-12-13 18:16:50.075270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:15.744 [2024-12-13 18:16:50.075694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms
00:22:15.744 [2024-12-13 18:16:50.075747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:15.744 [2024-12-13 18:16:50.075856] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:15.744 [2024-12-13 18:16:50.075891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:22:15.744 [2024-12-13 18:16:50.075973 .. 18:16:50.079591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 further entries, all identical to Band 1)
00:22:15.745 [2024-12-13 18:16:50.079608] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:15.745 [2024-12-13 18:16:50.079617] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e17aebad-70d1-4ddb-8f87-3aaa225d11f2
00:22:15.745 [2024-12-13 18:16:50.079625] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:15.745 [2024-12-13 18:16:50.079632] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:15.745 [2024-12-13 18:16:50.079640] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:15.745 [2024-12-13 18:16:50.079649] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:15.745 [2024-12-13 18:16:50.079667] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:15.745 [2024-12-13 18:16:50.079676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:15.745 [2024-12-13 18:16:50.079685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:15.745 [2024-12-13 18:16:50.079692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:15.745 [2024-12-13 18:16:50.079699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:15.745 [2024-12-13 18:16:50.079710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.745 [2024-12-13 18:16:50.079728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:15.745 [2024-12-13 18:16:50.079739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.856 ms 00:22:15.745 [2024-12-13 18:16:50.079751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.745 [2024-12-13 18:16:50.082356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.745 [2024-12-13 18:16:50.082516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:15.745 [2024-12-13 18:16:50.082576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:22:15.745 [2024-12-13 18:16:50.082602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.745 [2024-12-13 18:16:50.082758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.745 [2024-12-13 18:16:50.082818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:15.745 [2024-12-13 18:16:50.082897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:22:15.745 [2024-12-13 18:16:50.082922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.745 [2024-12-13 18:16:50.090571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.745 [2024-12-13 18:16:50.090735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:15.745 [2024-12-13 18:16:50.090791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.745 [2024-12-13 18:16:50.090815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.745 [2024-12-13 18:16:50.090909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.745 [2024-12-13 18:16:50.090931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:15.745 [2024-12-13 18:16:50.090999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.745 [2024-12-13 18:16:50.091022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.745 [2024-12-13 18:16:50.091112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.745 [2024-12-13 18:16:50.091139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:15.745 [2024-12-13 18:16:50.091159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.745 [2024-12-13 18:16:50.091182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.745 [2024-12-13 18:16:50.091216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.745 [2024-12-13 18:16:50.091239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:15.745 [2024-12-13 18:16:50.091277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.746 [2024-12-13 18:16:50.091297] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.746 [2024-12-13 18:16:50.104938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.746 [2024-12-13 18:16:50.105100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:15.746 [2024-12-13 18:16:50.105163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.746 [2024-12-13 18:16:50.105185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.746 [2024-12-13 18:16:50.115071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.746 [2024-12-13 18:16:50.115234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:15.746 [2024-12-13 18:16:50.115264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.746 [2024-12-13 18:16:50.115273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.746 [2024-12-13 18:16:50.115328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.746 [2024-12-13 18:16:50.115338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.746 [2024-12-13 18:16:50.115347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.746 [2024-12-13 18:16:50.115355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.746 [2024-12-13 18:16:50.115391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.746 [2024-12-13 18:16:50.115400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.746 [2024-12-13 18:16:50.115415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.746 [2024-12-13 18:16:50.115423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.746 [2024-12-13 18:16:50.115499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.746 [2024-12-13 18:16:50.115509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.746 [2024-12-13 18:16:50.115518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.746 [2024-12-13 18:16:50.115525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.746 [2024-12-13 18:16:50.115555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.746 [2024-12-13 18:16:50.115565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:15.746 [2024-12-13 18:16:50.115576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.746 [2024-12-13 18:16:50.115585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.746 [2024-12-13 18:16:50.115629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.746 [2024-12-13 18:16:50.115638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.746 [2024-12-13 18:16:50.115647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.746 [2024-12-13 18:16:50.115655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.746 [2024-12-13 18:16:50.115703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.746 [2024-12-13 18:16:50.115713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.746 [2024-12-13 18:16:50.115725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:15.746 [2024-12-13 18:16:50.115739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.746 [2024-12-13 18:16:50.115872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.722 ms, result 0 00:22:16.006 00:22:16.006 00:22:16.006 18:16:50 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:17.972 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:17.972 18:16:51 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:17.972 [2024-12-13 18:16:51.994396] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:17.972 [2024-12-13 18:16:51.994649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91283 ] 00:22:17.972 [2024-12-13 18:16:52.139765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.972 [2024-12-13 18:16:52.164149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.972 [2024-12-13 18:16:52.277042] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:17.972 [2024-12-13 18:16:52.277125] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:18.233 [2024-12-13 18:16:52.437690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.233 [2024-12-13 18:16:52.437753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:18.233 [2024-12-13 18:16:52.437768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:18.233 [2024-12-13 18:16:52.437777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.233 [2024-12-13 18:16:52.437837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.233 [2024-12-13 18:16:52.437849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:18.233 [2024-12-13 18:16:52.437859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:18.233 [2024-12-13 18:16:52.437873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.233 [2024-12-13 18:16:52.437902] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:18.233 [2024-12-13 18:16:52.438173] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:18.233 [2024-12-13 18:16:52.438194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.233 [2024-12-13 18:16:52.438206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:18.233 [2024-12-13 18:16:52.438217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:22:18.233 [2024-12-13 18:16:52.438226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.233 [2024-12-13 18:16:52.439941] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:18.233 [2024-12-13 18:16:52.443664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.233 [2024-12-13 
18:16:52.443859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:18.233 [2024-12-13 18:16:52.443880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.726 ms 00:22:18.233 [2024-12-13 18:16:52.443900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.233 [2024-12-13 18:16:52.444079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.233 [2024-12-13 18:16:52.444112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:18.233 [2024-12-13 18:16:52.444127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:18.233 [2024-12-13 18:16:52.444135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.233 [2024-12-13 18:16:52.452329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.233 [2024-12-13 18:16:52.452373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:18.233 [2024-12-13 18:16:52.452392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.146 ms 00:22:18.233 [2024-12-13 18:16:52.452401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.233 [2024-12-13 18:16:52.452497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.233 [2024-12-13 18:16:52.452508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:18.233 [2024-12-13 18:16:52.452517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:18.233 [2024-12-13 18:16:52.452524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.233 [2024-12-13 18:16:52.452600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.233 [2024-12-13 18:16:52.452611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:18.234 [2024-12-13 18:16:52.452619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:18.234 [2024-12-13 18:16:52.452630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.234 [2024-12-13 18:16:52.452659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:18.234 [2024-12-13 18:16:52.454885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.234 [2024-12-13 18:16:52.455051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:18.234 [2024-12-13 18:16:52.455069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.233 ms 00:22:18.234 [2024-12-13 18:16:52.455077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.234 [2024-12-13 18:16:52.455120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.234 [2024-12-13 18:16:52.455129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:18.234 [2024-12-13 18:16:52.455139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:18.234 [2024-12-13 18:16:52.455150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.234 [2024-12-13 18:16:52.455175] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:18.234 [2024-12-13 18:16:52.455198] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:18.234 [2024-12-13 18:16:52.455263] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:18.234 [2024-12-13 18:16:52.455281] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:18.234 [2024-12-13 18:16:52.455386] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:18.234 [2024-12-13 18:16:52.455397] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:18.234 [2024-12-13 18:16:52.455414] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:18.234 [2024-12-13 18:16:52.455424] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:18.234 [2024-12-13 18:16:52.455434] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:18.234 [2024-12-13 18:16:52.455443] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:18.234 [2024-12-13 18:16:52.455451] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:18.234 [2024-12-13 18:16:52.455461] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:18.234 [2024-12-13 18:16:52.455469] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:18.234 [2024-12-13 18:16:52.455477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.234 [2024-12-13 18:16:52.455484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:18.234 [2024-12-13 18:16:52.455493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:22:18.234 [2024-12-13 18:16:52.455503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.234 [2024-12-13 18:16:52.455590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.234 [2024-12-13 18:16:52.455602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:18.234 [2024-12-13 18:16:52.455609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:18.234 [2024-12-13 18:16:52.455616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.234 [2024-12-13 18:16:52.455714] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:18.234 [2024-12-13 18:16:52.455724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:18.234 [2024-12-13 18:16:52.455732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:18.234 [2024-12-13 18:16:52.455740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:18.234 [2024-12-13 18:16:52.455755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:18.234 [2024-12-13 18:16:52.455768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:18.234 [2024-12-13 18:16:52.455776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:18.234 [2024-12-13 18:16:52.455792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:22:18.234 [2024-12-13 18:16:52.455804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:18.234 [2024-12-13 18:16:52.455811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:18.234 [2024-12-13 18:16:52.455818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:18.234 [2024-12-13 18:16:52.455825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:18.234 [2024-12-13 18:16:52.455832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:18.234 [2024-12-13 18:16:52.455845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:18.234 [2024-12-13 18:16:52.455852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:18.234 [2024-12-13 18:16:52.455865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.234 [2024-12-13 18:16:52.455878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:18.234 [2024-12-13 18:16:52.455885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.234 [2024-12-13 18:16:52.455898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:18.234 [2024-12-13 18:16:52.455905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.234 [2024-12-13 18:16:52.455922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:18.234 [2024-12-13 18:16:52.455929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.234 [2024-12-13 18:16:52.455942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:18.234 [2024-12-13 18:16:52.455949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:18.234 [2024-12-13 18:16:52.455955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:18.234 [2024-12-13 18:16:52.455962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:18.234 [2024-12-13 18:16:52.455968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:18.234 [2024-12-13 18:16:52.455974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:18.234 [2024-12-13 18:16:52.455981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:18.234 [2024-12-13 18:16:52.455987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:18.234 [2024-12-13 18:16:52.455993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.234 [2024-12-13 18:16:52.456000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:18.234 [2024-12-13 18:16:52.456007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:18.234 [2024-12-13 18:16:52.456014] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.234 [2024-12-13 18:16:52.456024] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:18.234 [2024-12-13 18:16:52.456034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:18.234 [2024-12-13 18:16:52.456044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:18.234 [2024-12-13 18:16:52.456052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.234 [2024-12-13 18:16:52.456060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:18.234 [2024-12-13 18:16:52.456067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:18.234 [2024-12-13 18:16:52.456074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:18.234 [2024-12-13 18:16:52.456081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:18.234 [2024-12-13 18:16:52.456087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:18.234 [2024-12-13 18:16:52.456093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:18.234 [2024-12-13 18:16:52.456101] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:18.234 [2024-12-13 18:16:52.456110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:18.234 [2024-12-13 18:16:52.456119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:18.234 [2024-12-13 18:16:52.456126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:18.234 [2024-12-13 18:16:52.456133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:18.234 [2024-12-13 18:16:52.456140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:18.234 [2024-12-13 18:16:52.456150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:18.234 [2024-12-13 18:16:52.456158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:18.234 [2024-12-13 18:16:52.456165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:18.234 [2024-12-13 18:16:52.456172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:18.234 [2024-12-13 18:16:52.456179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:18.234 [2024-12-13 18:16:52.456192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:18.234 [2024-12-13 18:16:52.456199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:18.234 [2024-12-13 18:16:52.456207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:18.234 [2024-12-13 18:16:52.456213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:18.234 [2024-12-13 18:16:52.456220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:18.234 [2024-12-13 18:16:52.456228] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:18.235 [2024-12-13 18:16:52.456236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:18.235 [2024-12-13 18:16:52.456260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:18.235 [2024-12-13 18:16:52.456268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:18.235 [2024-12-13 18:16:52.456275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:18.235 [2024-12-13 18:16:52.456283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:18.235 [2024-12-13 18:16:52.456293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.456301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:18.235 [2024-12-13 18:16:52.456314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:22:18.235 [2024-12-13 18:16:52.456324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.469508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.469686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.235 [2024-12-13 18:16:52.469705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.138 ms 00:22:18.235 [2024-12-13 18:16:52.469714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.469806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.469815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:18.235 [2024-12-13 18:16:52.469824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:18.235 [2024-12-13 18:16:52.469836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.490174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.490227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.235 [2024-12-13 18:16:52.490275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.279 ms 00:22:18.235 [2024-12-13 18:16:52.490284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.490332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.490344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.235 [2024-12-13 18:16:52.490353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.003 ms 00:22:18.235 [2024-12-13 18:16:52.490361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.490869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.490900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:18.235 [2024-12-13 18:16:52.490913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:22:18.235 [2024-12-13 18:16:52.490925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.491085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.491096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.235 [2024-12-13 18:16:52.491106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:22:18.235 [2024-12-13 18:16:52.491119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.498612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.498795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.235 [2024-12-13 18:16:52.498814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.470 ms 00:22:18.235 [2024-12-13 18:16:52.498831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.502535] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:18.235 [2024-12-13 18:16:52.502585] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:18.235 [2024-12-13 18:16:52.502604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.502612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:18.235 [2024-12-13 18:16:52.502621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.669 ms 00:22:18.235 [2024-12-13 18:16:52.502628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.518715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.518760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:18.235 [2024-12-13 18:16:52.518772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.030 ms 00:22:18.235 [2024-12-13 18:16:52.518781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.521134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.521177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:18.235 [2024-12-13 18:16:52.521187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.303 ms 00:22:18.235 [2024-12-13 18:16:52.521195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.523585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.523630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:18.235 [2024-12-13 18:16:52.523640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.346 ms 00:22:18.235 [2024-12-13 18:16:52.523647] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.523997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.524018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:18.235 [2024-12-13 18:16:52.524028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:22:18.235 [2024-12-13 18:16:52.524036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.547870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.547925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:18.235 [2024-12-13 18:16:52.547937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.808 ms 00:22:18.235 [2024-12-13 18:16:52.547946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.555850] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:18.235 [2024-12-13 18:16:52.558911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.558953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:18.235 [2024-12-13 18:16:52.558966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.916 ms 00:22:18.235 [2024-12-13 18:16:52.558982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.559066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.559077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:18.235 [2024-12-13 18:16:52.559093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:18.235 [2024-12-13 18:16:52.559101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.559172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.559185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:18.235 [2024-12-13 18:16:52.559194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:18.235 [2024-12-13 18:16:52.559202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.559224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.559233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:18.235 [2024-12-13 18:16:52.559401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:18.235 [2024-12-13 18:16:52.559435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.559496] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:18.235 [2024-12-13 18:16:52.559521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.559541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:18.235 [2024-12-13 18:16:52.559563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:18.235 [2024-12-13 18:16:52.559583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.564699] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.564846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:18.235 [2024-12-13 18:16:52.564864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.082 ms 00:22:18.235 [2024-12-13 18:16:52.564873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.564942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.235 [2024-12-13 18:16:52.564952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:18.235 [2024-12-13 18:16:52.564961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:18.235 [2024-12-13 18:16:52.564978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.235 [2024-12-13 18:16:52.566084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 127.955 ms, result 0 00:22:19.618  [2024-12-13T18:16:54.935Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-13T18:16:55.874Z] Copying: 37/1024 [MB] (17 MBps) [2024-12-13T18:16:56.814Z] Copying: 56/1024 [MB] (19 MBps) [2024-12-13T18:16:57.754Z] Copying: 71/1024 [MB] (14 MBps) [2024-12-13T18:16:58.693Z] Copying: 88/1024 [MB] (16 MBps) [2024-12-13T18:16:59.634Z] Copying: 100/1024 [MB] (12 MBps) [2024-12-13T18:17:01.016Z] Copying: 117/1024 [MB] (17 MBps) [2024-12-13T18:17:01.586Z] Copying: 145/1024 [MB] (27 MBps) [2024-12-13T18:17:02.969Z] Copying: 163/1024 [MB] (17 MBps) [2024-12-13T18:17:03.909Z] Copying: 184/1024 [MB] (21 MBps) [2024-12-13T18:17:04.854Z] Copying: 206/1024 [MB] (22 MBps) [2024-12-13T18:17:05.804Z] Copying: 222/1024 [MB] (15 MBps) [2024-12-13T18:17:06.750Z] Copying: 233/1024 [MB] (10 MBps) [2024-12-13T18:17:07.697Z] Copying: 251/1024 [MB] (18 MBps) [2024-12-13T18:17:08.640Z] Copying: 265/1024 [MB] (14 MBps) [2024-12-13T18:17:09.585Z] Copying: 289/1024 [MB] (23 MBps) [2024-12-13T18:17:10.973Z] Copying: 324/1024 [MB] (34 MBps) [2024-12-13T18:17:11.912Z] Copying: 358/1024 [MB] (34 MBps) [2024-12-13T18:17:12.852Z] Copying: 394/1024 [MB] (35 MBps) [2024-12-13T18:17:13.794Z] Copying: 412/1024 [MB] (18 MBps) [2024-12-13T18:17:14.768Z] Copying: 434/1024 [MB] (22 MBps) [2024-12-13T18:17:15.716Z] Copying: 453/1024 [MB] (18 MBps) [2024-12-13T18:17:16.659Z] Copying: 470/1024 [MB] (17 MBps) [2024-12-13T18:17:17.647Z] Copying: 487/1024 [MB] (16 MBps) [2024-12-13T18:17:18.590Z] Copying: 506/1024 [MB] (18 MBps) [2024-12-13T18:17:19.979Z] Copying: 538/1024 [MB] (32 MBps) [2024-12-13T18:17:20.921Z] Copying: 555/1024 [MB] (16 MBps) [2024-12-13T18:17:21.863Z] Copying: 571/1024 [MB] (15 MBps) [2024-12-13T18:17:22.810Z] Copying: 588/1024 [MB] (16 MBps) [2024-12-13T18:17:23.755Z] Copying: 608/1024 [MB] (20 MBps) [2024-12-13T18:17:24.699Z] Copying: 635/1024 [MB] (27 MBps) [2024-12-13T18:17:25.650Z] Copying: 652/1024 [MB] (17 MBps) [2024-12-13T18:17:26.598Z] Copying: 668/1024 [MB] (15 MBps) [2024-12-13T18:17:27.988Z] Copying: 686/1024 [MB] (17 MBps) [2024-12-13T18:17:28.932Z] Copying: 697/1024 [MB] (11 MBps) [2024-12-13T18:17:29.876Z] Copying: 717/1024 [MB] (19 MBps) [2024-12-13T18:17:30.820Z] Copying: 731/1024 [MB] (14 MBps) [2024-12-13T18:17:31.765Z] Copying: 742/1024 [MB] (11 MBps) [2024-12-13T18:17:32.710Z] Copying: 755/1024 [MB] (13 MBps) [2024-12-13T18:17:33.655Z] Copying: 773/1024 [MB] (17 MBps) [2024-12-13T18:17:34.600Z] Copying: 787/1024 [MB] (14 MBps) [2024-12-13T18:17:35.987Z] Copying: 805/1024 [MB] (17 MBps) 
[2024-12-13T18:17:36.930Z] Copying: 826/1024 [MB] (20 MBps) [2024-12-13T18:17:37.604Z] Copying: 855/1024 [MB] (29 MBps) [2024-12-13T18:17:38.991Z] Copying: 872/1024 [MB] (16 MBps) [2024-12-13T18:17:39.933Z] Copying: 887/1024 [MB] (15 MBps) [2024-12-13T18:17:40.877Z] Copying: 908/1024 [MB] (20 MBps) [2024-12-13T18:17:41.820Z] Copying: 927/1024 [MB] (18 MBps) [2024-12-13T18:17:42.763Z] Copying: 941/1024 [MB] (14 MBps) [2024-12-13T18:17:43.707Z] Copying: 960/1024 [MB] (19 MBps) [2024-12-13T18:17:44.651Z] Copying: 973/1024 [MB] (12 MBps) [2024-12-13T18:17:45.601Z] Copying: 985/1024 [MB] (11 MBps) [2024-12-13T18:17:46.988Z] Copying: 999/1024 [MB] (14 MBps) [2024-12-13T18:17:47.933Z] Copying: 1020/1024 [MB] (20 MBps) [2024-12-13T18:17:47.933Z] Copying: 1048484/1048576 [kB] (3652 kBps) [2024-12-13T18:17:47.933Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-13 18:17:47.679009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.556 [2024-12-13 18:17:47.679084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:13.556 [2024-12-13 18:17:47.679100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:13.556 [2024-12-13 18:17:47.679119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.556 [2024-12-13 18:17:47.679144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:13.556 [2024-12-13 18:17:47.683885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.556 [2024-12-13 18:17:47.683955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:13.556 [2024-12-13 18:17:47.683979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.724 ms 00:23:13.556 [2024-12-13 18:17:47.683988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.556 [2024-12-13 18:17:47.695202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.556 [2024-12-13 18:17:47.695277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:13.556 [2024-12-13 18:17:47.695290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.305 ms 00:23:13.556 [2024-12-13 18:17:47.695309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.556 [2024-12-13 18:17:47.722636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.556 [2024-12-13 18:17:47.722705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:13.556 [2024-12-13 18:17:47.722717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.307 ms 00:23:13.556 [2024-12-13 18:17:47.722726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.556 [2024-12-13 18:17:47.729120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.556 [2024-12-13 18:17:47.729176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:13.556 [2024-12-13 18:17:47.729187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.346 ms 00:23:13.556 [2024-12-13 18:17:47.729196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.556 [2024-12-13 18:17:47.732114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.556 [2024-12-13 18:17:47.732171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:13.556 [2024-12-13 18:17:47.732183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.836 ms 00:23:13.556 [2024-12-13 18:17:47.732191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.556 [2024-12-13 18:17:47.737124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.556 [2024-12-13 18:17:47.737358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:13.556 [2024-12-13 18:17:47.737381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.887 ms 00:23:13.556 [2024-12-13 18:17:47.737398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.819 [2024-12-13 18:17:48.026476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.819 [2024-12-13 18:17:48.026556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:13.819 [2024-12-13 18:17:48.026576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 289.028 ms 00:23:13.819 [2024-12-13 18:17:48.026585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.819 [2024-12-13 18:17:48.030070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.819 [2024-12-13 18:17:48.030126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:13.819 [2024-12-13 18:17:48.030139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.465 ms 00:23:13.819 [2024-12-13 18:17:48.030147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.819 [2024-12-13 18:17:48.033498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.819 [2024-12-13 18:17:48.033549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:13.819 [2024-12-13 18:17:48.033560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:23:13.819 [2024-12-13 18:17:48.033567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.819 [2024-12-13 18:17:48.036021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.819 [2024-12-13 18:17:48.036073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:13.819 [2024-12-13 18:17:48.036083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:23:13.819 [2024-12-13 18:17:48.036091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.819 [2024-12-13 18:17:48.038557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.819 [2024-12-13 18:17:48.038612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:13.819 [2024-12-13 18:17:48.038623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.395 ms 00:23:13.819 [2024-12-13 18:17:48.038631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.819 [2024-12-13 18:17:48.038674] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:13.819 [2024-12-13 18:17:48.038689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102400 / 261120 wr_cnt: 1 state: open 00:23:13.819 [2024-12-13 18:17:48.038710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:23:13.819 [2024-12-13 18:17:48.038734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.038995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.039002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.039009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.039017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.039032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.039040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:13.819 [2024-12-13 18:17:48.039048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039338] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:13.820 [2024-12-13 18:17:48.039525] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:13.820 [2024-12-13 18:17:48.039533] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e17aebad-70d1-4ddb-8f87-3aaa225d11f2 00:23:13.820 [2024-12-13 18:17:48.039542] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102400 00:23:13.820 [2024-12-13 18:17:48.039555] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 103360 00:23:13.820 [2024-12-13 18:17:48.039567] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102400 00:23:13.820 [2024-12-13 18:17:48.039585] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:23:13.820 [2024-12-13 18:17:48.039593] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:13.820 [2024-12-13 18:17:48.039606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:13.820 [2024-12-13 18:17:48.039617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:13.820 [2024-12-13 18:17:48.039624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:13.820 [2024-12-13 18:17:48.039631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:13.820 [2024-12-13 18:17:48.039638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.820 [2024-12-13 18:17:48.039646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:13.820 [2024-12-13 18:17:48.039654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:23:13.820 [2024-12-13 18:17:48.039665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.820 [2024-12-13 18:17:48.042121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.820 [2024-12-13 18:17:48.042155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:13.820 [2024-12-13 18:17:48.042166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.437 ms 00:23:13.820 [2024-12-13 18:17:48.042177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.820 [2024-12-13 18:17:48.042347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.820 [2024-12-13 18:17:48.042358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:13.820 [2024-12-13 18:17:48.042368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:23:13.820 [2024-12-13 18:17:48.042379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.820 [2024-12-13 18:17:48.050198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.820 [2024-12-13 18:17:48.050269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:13.820 [2024-12-13 18:17:48.050281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.820 [2024-12-13 18:17:48.050289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.820 [2024-12-13 18:17:48.050352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.820 [2024-12-13 18:17:48.050360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:13.820 [2024-12-13 18:17:48.050369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.820 [2024-12-13 18:17:48.050380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.820 [2024-12-13 18:17:48.050482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.820 [2024-12-13 18:17:48.050493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:13.820 [2024-12-13 18:17:48.050506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.820 [2024-12-13 18:17:48.050514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.820 [2024-12-13 18:17:48.050531] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.820 [2024-12-13 18:17:48.050539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:13.820 [2024-12-13 18:17:48.050548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.820 [2024-12-13 18:17:48.050556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.820 [2024-12-13 18:17:48.064621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.820 [2024-12-13 18:17:48.064702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:13.820 [2024-12-13 18:17:48.064714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.820 [2024-12-13 18:17:48.064723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.820 [2024-12-13 18:17:48.075019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.820 [2024-12-13 18:17:48.075074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:13.821 [2024-12-13 18:17:48.075086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.821 [2024-12-13 18:17:48.075094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.821 [2024-12-13 18:17:48.075151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.821 [2024-12-13 18:17:48.075161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:13.821 [2024-12-13 18:17:48.075169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.821 [2024-12-13 18:17:48.075178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.821 [2024-12-13 18:17:48.075212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.821 [2024-12-13 18:17:48.075221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:13.821 [2024-12-13 18:17:48.075229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.821 [2024-12-13 18:17:48.075236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.821 [2024-12-13 18:17:48.075340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.821 [2024-12-13 18:17:48.075361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:13.821 [2024-12-13 18:17:48.075370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.821 [2024-12-13 18:17:48.075378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.821 [2024-12-13 18:17:48.075410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.821 [2024-12-13 18:17:48.075419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:13.821 [2024-12-13 18:17:48.075430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.821 [2024-12-13 18:17:48.075438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.821 [2024-12-13 18:17:48.075479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.821 [2024-12-13 18:17:48.075491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:13.821 [2024-12-13 18:17:48.075500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.821 [2024-12-13 18:17:48.075507] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:13.821 [2024-12-13 18:17:48.075550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:13.821 [2024-12-13 18:17:48.075560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:13.821 [2024-12-13 18:17:48.075569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:13.821 [2024-12-13 18:17:48.075584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.821 [2024-12-13 18:17:48.075715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.672 ms, result 0 00:23:14.394 00:23:14.394 00:23:14.394 18:17:48 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:14.655 [2024-12-13 18:17:48.776363] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:14.655 [2024-12-13 18:17:48.776509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91867 ] 00:23:14.655 [2024-12-13 18:17:48.916289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.655 [2024-12-13 18:17:48.947492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.918 [2024-12-13 18:17:49.059265] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.918 [2024-12-13 18:17:49.059349] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.918 [2024-12-13 18:17:49.219860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.219924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:14.918 [2024-12-13 18:17:49.219939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:14.918 [2024-12-13 18:17:49.219952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.220014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.220025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.918 [2024-12-13 18:17:49.220036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:14.918 [2024-12-13 18:17:49.220050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.220081] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:14.918 [2024-12-13 18:17:49.220377] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:14.918 [2024-12-13 18:17:49.220395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.220404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.918 [2024-12-13 18:17:49.220418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:23:14.918 [2024-12-13 18:17:49.220426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.222190] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:14.918 [2024-12-13 18:17:49.226042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.226271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:14.918 [2024-12-13 18:17:49.226301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:23:14.918 [2024-12-13 18:17:49.226316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.226385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.226395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:14.918 [2024-12-13 18:17:49.226404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:14.918 [2024-12-13 18:17:49.226413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.234624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.234666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.918 [2024-12-13 18:17:49.234687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.165 ms 00:23:14.918 [2024-12-13 18:17:49.234694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.234792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.234802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.918 [2024-12-13 18:17:49.234810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:14.918 [2024-12-13 18:17:49.234818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.234873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.234883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:14.918 [2024-12-13 18:17:49.234892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:14.918 [2024-12-13 18:17:49.234902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.234924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:14.918 [2024-12-13 18:17:49.236923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.236959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.918 [2024-12-13 18:17:49.236972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.004 ms 00:23:14.918 [2024-12-13 18:17:49.236979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.237016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.237025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:14.918 [2024-12-13 18:17:49.237037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:14.918 [2024-12-13 18:17:49.237051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.237072] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:14.918 [2024-12-13 18:17:49.237094] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:14.918 [2024-12-13 18:17:49.237138] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:14.918 [2024-12-13 18:17:49.237154] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:14.918 [2024-12-13 18:17:49.237278] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:14.918 [2024-12-13 18:17:49.237290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:14.918 [2024-12-13 18:17:49.237305] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:14.918 [2024-12-13 18:17:49.237316] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:14.918 [2024-12-13 18:17:49.237325] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:14.918 [2024-12-13 18:17:49.237333] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:14.918 [2024-12-13 18:17:49.237343] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:14.918 [2024-12-13 18:17:49.237350] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:14.918 [2024-12-13 18:17:49.237357] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:14.918 [2024-12-13 18:17:49.237366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.237374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:14.918 [2024-12-13 18:17:49.237382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:23:14.918 [2024-12-13 18:17:49.237389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.237474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.918 [2024-12-13 18:17:49.237482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:14.918 [2024-12-13 18:17:49.237490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:14.918 [2024-12-13 18:17:49.237497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.918 [2024-12-13 18:17:49.237593] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:14.918 [2024-12-13 18:17:49.237603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:14.918 [2024-12-13 18:17:49.237612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.918 [2024-12-13 18:17:49.237627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.918 [2024-12-13 18:17:49.237635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:14.918 [2024-12-13 18:17:49.237647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:14.918 [2024-12-13 18:17:49.237655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:14.918 [2024-12-13 18:17:49.237664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:14.918 [2024-12-13 18:17:49.237672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:14.918 [2024-12-13 
18:17:49.237682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.918 [2024-12-13 18:17:49.237691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:14.918 [2024-12-13 18:17:49.237699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:14.918 [2024-12-13 18:17:49.237707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.918 [2024-12-13 18:17:49.237714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:14.918 [2024-12-13 18:17:49.237722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:14.918 [2024-12-13 18:17:49.237730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.918 [2024-12-13 18:17:49.237738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:14.918 [2024-12-13 18:17:49.237746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:14.918 [2024-12-13 18:17:49.237754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.918 [2024-12-13 18:17:49.237761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:14.918 [2024-12-13 18:17:49.237769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:14.918 [2024-12-13 18:17:49.237780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.918 [2024-12-13 18:17:49.237788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:14.918 [2024-12-13 18:17:49.237796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:14.918 [2024-12-13 18:17:49.237803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.918 [2024-12-13 18:17:49.237811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:14.918 [2024-12-13 18:17:49.237818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:14.918 [2024-12-13 18:17:49.237825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.918 [2024-12-13 18:17:49.237833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:14.918 [2024-12-13 18:17:49.237841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:14.918 [2024-12-13 18:17:49.237848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.918 [2024-12-13 18:17:49.237855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:14.918 [2024-12-13 18:17:49.237863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:14.918 [2024-12-13 18:17:49.237870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.919 [2024-12-13 18:17:49.237877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:14.919 [2024-12-13 18:17:49.237885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:14.919 [2024-12-13 18:17:49.237892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.919 [2024-12-13 18:17:49.237901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:14.919 [2024-12-13 18:17:49.237909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:14.919 [2024-12-13 18:17:49.237916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.919 [2024-12-13 18:17:49.237924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:23:14.919 [2024-12-13 18:17:49.237932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:14.919 [2024-12-13 18:17:49.237941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.919 [2024-12-13 18:17:49.237948] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:14.919 [2024-12-13 18:17:49.237958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:14.919 [2024-12-13 18:17:49.237966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.919 [2024-12-13 18:17:49.237973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.919 [2024-12-13 18:17:49.237981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:14.919 [2024-12-13 18:17:49.237988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:14.919 [2024-12-13 18:17:49.237994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:14.919 [2024-12-13 18:17:49.238001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:14.919 [2024-12-13 18:17:49.238008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:14.919 [2024-12-13 18:17:49.238014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:14.919 [2024-12-13 18:17:49.238025] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:14.919 [2024-12-13 18:17:49.238033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.919 [2024-12-13 18:17:49.238042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:14.919 [2024-12-13 18:17:49.238049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:14.919 [2024-12-13 18:17:49.238056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:14.919 [2024-12-13 18:17:49.238063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:14.919 [2024-12-13 18:17:49.238070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:14.919 [2024-12-13 18:17:49.238077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:14.919 [2024-12-13 18:17:49.238084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:14.919 [2024-12-13 18:17:49.238091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:14.919 [2024-12-13 18:17:49.238098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:14.919 [2024-12-13 18:17:49.238110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:14.919 [2024-12-13 18:17:49.238117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:14.919 [2024-12-13 18:17:49.238124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:14.919 [2024-12-13 18:17:49.238131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:14.919 [2024-12-13 18:17:49.238137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:14.919 [2024-12-13 18:17:49.238146] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:14.919 [2024-12-13 18:17:49.238154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.919 [2024-12-13 18:17:49.238162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:14.919 [2024-12-13 18:17:49.238169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:14.919 [2024-12-13 18:17:49.238178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:14.919 [2024-12-13 18:17:49.238186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:14.919 [2024-12-13 18:17:49.238193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.919 [2024-12-13 18:17:49.238200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:14.919 [2024-12-13 18:17:49.238208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:23:14.919 [2024-12-13 18:17:49.238217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.919 [2024-12-13 18:17:49.251635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.919 [2024-12-13 18:17:49.251797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.919 [2024-12-13 18:17:49.251854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.110 ms 00:23:14.919 [2024-12-13 18:17:49.251879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.919 [2024-12-13 18:17:49.251979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.919 [2024-12-13 18:17:49.252001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:14.919 [2024-12-13 18:17:49.252021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:14.919 [2024-12-13 18:17:49.252077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.919 [2024-12-13 18:17:49.276650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.919 [2024-12-13 18:17:49.276966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.919 [2024-12-13 18:17:49.277226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.493 ms 00:23:14.919 [2024-12-13 18:17:49.277344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.919 [2024-12-13 18:17:49.277460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.919 [2024-12-13 
18:17:49.277710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.919 [2024-12-13 18:17:49.277767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:14.919 [2024-12-13 18:17:49.277810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.919 [2024-12-13 18:17:49.278560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.919 [2024-12-13 18:17:49.278771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.919 [2024-12-13 18:17:49.278805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:23:14.919 [2024-12-13 18:17:49.278825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.919 [2024-12-13 18:17:49.279108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.919 [2024-12-13 18:17:49.279135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:14.919 [2024-12-13 18:17:49.279153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:23:14.919 [2024-12-13 18:17:49.279170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.919 [2024-12-13 18:17:49.288549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.919 [2024-12-13 18:17:49.288718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.919 [2024-12-13 18:17:49.288774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.333 ms 00:23:14.919 [2024-12-13 18:17:49.288797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.292774] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:15.181 [2024-12-13 18:17:49.292947] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:15.181 [2024-12-13 18:17:49.292972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.292980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:15.181 [2024-12-13 18:17:49.292989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.042 ms 00:23:15.181 [2024-12-13 18:17:49.292996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.309014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.309064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:15.181 [2024-12-13 18:17:49.309086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.975 ms 00:23:15.181 [2024-12-13 18:17:49.309095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.312125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.312172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:15.181 [2024-12-13 18:17:49.312183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.976 ms 00:23:15.181 [2024-12-13 18:17:49.312191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.314983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.315031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:23:15.181 [2024-12-13 18:17:49.315041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:23:15.181 [2024-12-13 18:17:49.315049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.315407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.315420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:15.181 [2024-12-13 18:17:49.315429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:23:15.181 [2024-12-13 18:17:49.315441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.340994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.341203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:15.181 [2024-12-13 18:17:49.341225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.523 ms 00:23:15.181 [2024-12-13 18:17:49.341265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.349443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:15.181 [2024-12-13 18:17:49.352546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.352584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:15.181 [2024-12-13 18:17:49.352596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.173 ms 00:23:15.181 [2024-12-13 18:17:49.352615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.352705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.352716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:15.181 [2024-12-13 18:17:49.352725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:15.181 [2024-12-13 18:17:49.352740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.354514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.354560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:15.181 [2024-12-13 18:17:49.354572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:23:15.181 [2024-12-13 18:17:49.354580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.354613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.354622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:15.181 [2024-12-13 18:17:49.354631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:15.181 [2024-12-13 18:17:49.354639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.354679] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:15.181 [2024-12-13 18:17:49.354693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.354701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:15.181 [2024-12-13 18:17:49.354712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.014 ms 00:23:15.181 [2024-12-13 18:17:49.354720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.360008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.360070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:15.181 [2024-12-13 18:17:49.360082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.269 ms 00:23:15.181 [2024-12-13 18:17:49.360091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.360171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.181 [2024-12-13 18:17:49.360181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:15.181 [2024-12-13 18:17:49.360190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:15.181 [2024-12-13 18:17:49.360201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.181 [2024-12-13 18:17:49.361687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 141.331 ms, result 0 00:23:16.568  [2024-12-13T18:17:51.889Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-13T18:17:52.833Z] Copying: 26/1024 [MB] (14 MBps) [2024-12-13T18:17:53.777Z] Copying: 38/1024 [MB] (12 MBps) [2024-12-13T18:17:54.721Z] Copying: 62/1024 [MB] (23 MBps) [2024-12-13T18:17:55.663Z] Copying: 82/1024 [MB] (19 MBps) [2024-12-13T18:17:56.607Z] Copying: 97/1024 [MB] (15 MBps) [2024-12-13T18:17:57.551Z] Copying: 113/1024 [MB] (15 MBps) [2024-12-13T18:17:58.939Z] Copying: 134/1024 [MB] (21 MBps) [2024-12-13T18:17:59.970Z] Copying: 155/1024 [MB] (20 MBps) [2024-12-13T18:18:00.564Z] Copying: 174/1024 [MB] (19 MBps) [2024-12-13T18:18:01.952Z] Copying: 191/1024 [MB] (17 MBps) [2024-12-13T18:18:02.895Z] Copying: 210/1024 [MB] (18 MBps) [2024-12-13T18:18:03.838Z] Copying: 228/1024 [MB] (18 MBps) [2024-12-13T18:18:04.782Z] Copying: 248/1024 [MB] (19 MBps) [2024-12-13T18:18:05.724Z] Copying: 266/1024 [MB] (17 MBps) [2024-12-13T18:18:06.669Z] Copying: 281/1024 [MB] (15 MBps) [2024-12-13T18:18:07.618Z] Copying: 292/1024 [MB] (10 MBps) [2024-12-13T18:18:08.565Z] Copying: 307/1024 [MB] (15 MBps) [2024-12-13T18:18:09.952Z] Copying: 324/1024 [MB] (16 MBps) [2024-12-13T18:18:10.897Z] Copying: 345/1024 [MB] (20 MBps) [2024-12-13T18:18:11.841Z] Copying: 359/1024 [MB] (13 MBps) [2024-12-13T18:18:12.784Z] Copying: 376/1024 [MB] (17 MBps) [2024-12-13T18:18:13.727Z] Copying: 393/1024 [MB] (17 MBps) [2024-12-13T18:18:14.670Z] Copying: 406/1024 [MB] (13 MBps) [2024-12-13T18:18:15.617Z] Copying: 416/1024 [MB] (10 MBps) [2024-12-13T18:18:16.552Z] Copying: 428/1024 [MB] (11 MBps) [2024-12-13T18:18:17.940Z] Copying: 442/1024 [MB] (14 MBps) [2024-12-13T18:18:18.884Z] Copying: 453/1024 [MB] (10 MBps) [2024-12-13T18:18:19.826Z] Copying: 463/1024 [MB] (10 MBps) [2024-12-13T18:18:20.769Z] Copying: 473/1024 [MB] (10 MBps) [2024-12-13T18:18:21.710Z] Copying: 484/1024 [MB] (10 MBps) [2024-12-13T18:18:22.667Z] Copying: 495/1024 [MB] (11 MBps) [2024-12-13T18:18:23.675Z] Copying: 506/1024 [MB] (10 MBps) [2024-12-13T18:18:24.613Z] Copying: 517/1024 [MB] (11 MBps) [2024-12-13T18:18:25.552Z] Copying: 529/1024 [MB] (12 MBps) [2024-12-13T18:18:26.932Z] Copying: 543/1024 [MB] (13 MBps) [2024-12-13T18:18:27.874Z] Copying: 555/1024 [MB] (11 MBps) [2024-12-13T18:18:28.813Z] Copying: 566/1024 [MB] (11 MBps) [2024-12-13T18:18:29.751Z] Copying: 577/1024 [MB] (11 
MBps) [2024-12-13T18:18:30.693Z] Copying: 588/1024 [MB] (11 MBps) [2024-12-13T18:18:31.636Z] Copying: 607/1024 [MB] (18 MBps) [2024-12-13T18:18:32.577Z] Copying: 620/1024 [MB] (12 MBps) [2024-12-13T18:18:33.963Z] Copying: 630/1024 [MB] (10 MBps) [2024-12-13T18:18:34.907Z] Copying: 644/1024 [MB] (13 MBps) [2024-12-13T18:18:35.850Z] Copying: 654/1024 [MB] (10 MBps) [2024-12-13T18:18:36.793Z] Copying: 672/1024 [MB] (17 MBps) [2024-12-13T18:18:37.736Z] Copying: 694/1024 [MB] (22 MBps) [2024-12-13T18:18:38.679Z] Copying: 704/1024 [MB] (10 MBps) [2024-12-13T18:18:39.623Z] Copying: 715/1024 [MB] (10 MBps) [2024-12-13T18:18:40.567Z] Copying: 733/1024 [MB] (17 MBps) [2024-12-13T18:18:41.952Z] Copying: 754/1024 [MB] (21 MBps) [2024-12-13T18:18:42.895Z] Copying: 778/1024 [MB] (24 MBps) [2024-12-13T18:18:43.838Z] Copying: 802/1024 [MB] (24 MBps) [2024-12-13T18:18:44.781Z] Copying: 834/1024 [MB] (31 MBps) [2024-12-13T18:18:45.723Z] Copying: 848/1024 [MB] (14 MBps) [2024-12-13T18:18:46.722Z] Copying: 869/1024 [MB] (20 MBps) [2024-12-13T18:18:47.665Z] Copying: 890/1024 [MB] (21 MBps) [2024-12-13T18:18:48.605Z] Copying: 911/1024 [MB] (20 MBps) [2024-12-13T18:18:49.547Z] Copying: 928/1024 [MB] (17 MBps) [2024-12-13T18:18:50.935Z] Copying: 944/1024 [MB] (15 MBps) [2024-12-13T18:18:51.885Z] Copying: 959/1024 [MB] (14 MBps) [2024-12-13T18:18:52.829Z] Copying: 978/1024 [MB] (19 MBps) [2024-12-13T18:18:53.772Z] Copying: 994/1024 [MB] (15 MBps) [2024-12-13T18:18:54.716Z] Copying: 1006/1024 [MB] (12 MBps) [2024-12-13T18:18:55.294Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-13 18:18:55.136173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.917 [2024-12-13 18:18:55.136276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:20.917 [2024-12-13 18:18:55.136293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:20.917 [2024-12-13 18:18:55.136302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.917 [2024-12-13 18:18:55.136326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:20.917 [2024-12-13 18:18:55.137111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.917 [2024-12-13 18:18:55.137147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:20.917 [2024-12-13 18:18:55.137166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:24:20.917 [2024-12-13 18:18:55.137174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.917 [2024-12-13 18:18:55.137433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.917 [2024-12-13 18:18:55.137444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:20.917 [2024-12-13 18:18:55.137453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:24:20.917 [2024-12-13 18:18:55.137462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.917 [2024-12-13 18:18:55.143993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.917 [2024-12-13 18:18:55.144287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:20.917 [2024-12-13 18:18:55.144313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.510 ms 00:24:20.917 [2024-12-13 18:18:55.144322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.917 [2024-12-13 18:18:55.150518] 
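A minimal sketch, not part of the captured console output: the copy that just finished at an average of 15 MBps is restore.sh's read-back pass. The FTL device was brought up again (the 'FTL startup' sequence above), and the region written by the earlier pass is dumped out of ftl0 so it can be checksummed. The spdk_dd arguments below are copied from restore.sh@80 in this run and the md5sum step from restore.sh@82; the absolute paths are abbreviated for readability.

    # Read 262144 blocks starting at block offset 131072 back out of the
    # restored FTL bdev, then compare against the checksum recorded before
    # shutdown. Flags are verbatim from this log; paths are shortened.
    spdk_dd --ib=ftl0 --of=testfile --json=ftl.json \
            --skip=131072 --count=262144
    md5sum -c testfile.md5   # the log later prints "testfile: OK"
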
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.917 [2024-12-13 18:18:55.150565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:20.917 [2024-12-13 18:18:55.150577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.141 ms 00:24:20.917 [2024-12-13 18:18:55.150585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.917 [2024-12-13 18:18:55.153460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.917 [2024-12-13 18:18:55.153510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:20.917 [2024-12-13 18:18:55.153521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.803 ms 00:24:20.917 [2024-12-13 18:18:55.153529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.917 [2024-12-13 18:18:55.158477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.917 [2024-12-13 18:18:55.158527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:20.917 [2024-12-13 18:18:55.158538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.903 ms 00:24:20.917 [2024-12-13 18:18:55.158554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.178 [2024-12-13 18:18:55.334236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.178 [2024-12-13 18:18:55.334335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:21.178 [2024-12-13 18:18:55.334352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 175.630 ms 00:24:21.178 [2024-12-13 18:18:55.334361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.178 [2024-12-13 18:18:55.337119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.178 [2024-12-13 18:18:55.337173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:21.178 [2024-12-13 18:18:55.337185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:24:21.178 [2024-12-13 18:18:55.337192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.178 [2024-12-13 18:18:55.339208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.178 [2024-12-13 18:18:55.339276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:21.178 [2024-12-13 18:18:55.339287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.968 ms 00:24:21.178 [2024-12-13 18:18:55.339294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.178 [2024-12-13 18:18:55.340826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.178 [2024-12-13 18:18:55.340879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:21.178 [2024-12-13 18:18:55.340891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:24:21.178 [2024-12-13 18:18:55.340900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.178 [2024-12-13 18:18:55.342766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.178 [2024-12-13 18:18:55.342991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:21.178 [2024-12-13 18:18:55.343013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:24:21.178 [2024-12-13 18:18:55.343023] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:21.178 [2024-12-13 18:18:55.343065] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:21.178 [2024-12-13 18:18:55.343082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:21.178 [2024-12-13 18:18:55.343093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:21.178 [2024-12-13 18:18:55.343102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:21.178 [2024-12-13 18:18:55.343111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:21.178 [2024-12-13 18:18:55.343120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:21.178 [2024-12-13 18:18:55.343127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343711] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:21.179 [2024-12-13 18:18:55.343896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:21.180 [2024-12-13 18:18:55.343904] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:21.180 [2024-12-13 18:18:55.343912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:21.180 [2024-12-13 18:18:55.343929] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:21.180 [2024-12-13 18:18:55.343943] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e17aebad-70d1-4ddb-8f87-3aaa225d11f2 00:24:21.180 [2024-12-13 18:18:55.343951] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:21.180 [2024-12-13 18:18:55.343966] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 29632 00:24:21.180 [2024-12-13 18:18:55.343979] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 28672 00:24:21.180 [2024-12-13 18:18:55.343988] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0335 00:24:21.180 [2024-12-13 18:18:55.343995] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:21.180 [2024-12-13 18:18:55.344003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:21.180 [2024-12-13 18:18:55.344011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:21.180 [2024-12-13 18:18:55.344018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:21.180 [2024-12-13 18:18:55.344025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:21.180 [2024-12-13 18:18:55.344032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.180 [2024-12-13 18:18:55.344040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:21.180 [2024-12-13 18:18:55.344050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:24:21.180 [2024-12-13 18:18:55.344058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.346661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.180 [2024-12-13 18:18:55.346696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:21.180 [2024-12-13 18:18:55.346712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:24:21.180 [2024-12-13 18:18:55.346722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.346851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.180 [2024-12-13 18:18:55.346860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:21.180 [2024-12-13 18:18:55.346869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:24:21.180 [2024-12-13 18:18:55.346877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.354636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.354691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:21.180 [2024-12-13 18:18:55.354701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.354709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.354769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.354778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
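A quick check, not part of the log: the statistics block above contains everything needed to reproduce the reported write amplification. Dividing the logged total writes by the logged user writes gives exactly the logged WAF, so for this run WAF = total media writes / host writes.

    # Both counters are copied verbatim from the ftl_dev_dump_stats output
    # above: 29632 total writes, 28672 user writes.
    awk 'BEGIN { printf "WAF: %.4f\n", 29632 / 28672 }'   # prints WAF: 1.0335
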
00:24:21.180 [2024-12-13 18:18:55.354786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.354795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.354862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.354872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:21.180 [2024-12-13 18:18:55.354885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.354893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.354909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.354918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:21.180 [2024-12-13 18:18:55.354927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.354935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.368491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.368544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:21.180 [2024-12-13 18:18:55.368564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.368573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.378584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.378633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:21.180 [2024-12-13 18:18:55.378651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.378660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.378716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.378726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:21.180 [2024-12-13 18:18:55.378734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.378743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.378780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.378790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:21.180 [2024-12-13 18:18:55.378798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.378806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.378872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.378884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:21.180 [2024-12-13 18:18:55.378893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.378900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.378934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.378944] 
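An illustrative aside, not SPDK code: the Rollback entries in this closing 'FTL shutdown' sequence replay the startup Actions in reverse. Open base bdev, the first device step after the configuration check at startup, appears as the very last rollback just below. The management pipeline in mngt/ftl_mngt.c effectively keeps an undo stack of completed steps; the same ordering in shell form, with step names borrowed from this trace:

    # Push each completed step onto the front of an undo list, then replay
    # the list at shutdown: last step finished, first step rolled back.
    undo=()
    for step in "Open base bdev" "Open cache bdev" "Initialize memory pools"; do
        echo "Action:   $step"
        undo=("$step" "${undo[@]}")
    done
    for step in "${undo[@]}"; do
        echo "Rollback: $step"
    done
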
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:21.180 [2024-12-13 18:18:55.378953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.378960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.378999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.379011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:21.180 [2024-12-13 18:18:55.379020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.379029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.379073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.180 [2024-12-13 18:18:55.379084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:21.180 [2024-12-13 18:18:55.379092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.180 [2024-12-13 18:18:55.379105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.180 [2024-12-13 18:18:55.379271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 243.032 ms, result 0 00:24:21.441 00:24:21.441 00:24:21.441 18:18:55 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:23.987 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:23.987 Process with pid 89804 is not found 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 89804 00:24:23.987 18:18:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89804 ']' 00:24:23.987 18:18:57 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89804 00:24:23.987 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89804) - No such process 00:24:23.987 18:18:57 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 89804 is not found' 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:23.987 Remove shared memory files 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:23.987 18:18:57 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:23.987 ************************************ 00:24:23.987 END TEST ftl_restore 00:24:23.987 ************************************ 00:24:23.987 00:24:23.987 real 4m27.569s 00:24:23.987 user 4m14.429s 00:24:23.987 sys 0m12.684s 00:24:23.987 18:18:57 ftl.ftl_restore -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.987 18:18:57 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:23.987 18:18:57 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:23.987 18:18:57 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:23.987 18:18:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.987 18:18:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:23.987 ************************************ 00:24:23.987 START TEST ftl_dirty_shutdown 00:24:23.988 ************************************ 00:24:23.988 18:18:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:23.988 * Looking for test storage... 00:24:23.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.988 --rc genhtml_branch_coverage=1 00:24:23.988 --rc genhtml_function_coverage=1 00:24:23.988 --rc genhtml_legend=1 00:24:23.988 --rc geninfo_all_blocks=1 00:24:23.988 --rc geninfo_unexecuted_blocks=1 00:24:23.988 00:24:23.988 ' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.988 --rc genhtml_branch_coverage=1 00:24:23.988 --rc genhtml_function_coverage=1 00:24:23.988 --rc genhtml_legend=1 00:24:23.988 --rc geninfo_all_blocks=1 00:24:23.988 --rc geninfo_unexecuted_blocks=1 00:24:23.988 00:24:23.988 ' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.988 --rc genhtml_branch_coverage=1 00:24:23.988 --rc genhtml_function_coverage=1 00:24:23.988 --rc genhtml_legend=1 00:24:23.988 --rc geninfo_all_blocks=1 00:24:23.988 --rc geninfo_unexecuted_blocks=1 00:24:23.988 00:24:23.988 ' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.988 --rc genhtml_branch_coverage=1 00:24:23.988 --rc genhtml_function_coverage=1 00:24:23.988 --rc genhtml_legend=1 00:24:23.988 --rc geninfo_all_blocks=1 00:24:23.988 --rc geninfo_unexecuted_blocks=1 00:24:23.988 00:24:23.988 ' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:23.988 18:18:58 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92642 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92642 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92642 ']' 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.988 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:23.988 [2024-12-13 18:18:58.181919] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
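[editor's note] The trace above shows dirty_shutdown.sh launching spdk_tgt pinned to core 0 and then blocking in waitforlisten until the RPC socket answers. A minimal replay sketch of that bring-up, using the paths from this run (your checkout location may differ); the polling loop is an approximation of what the waitforlisten helper in autotest_common.sh does, not a copy of it:

    #!/usr/bin/env bash
    # Launch the SPDK target on core 0, as dirty_shutdown.sh@44 does.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 &
    svcpid=$!
    # Poll the default UNIX-domain RPC socket until the target is up;
    # rpc_get_methods is a cheap no-op query for this purpose.
    for i in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.5
    done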
00:24:23.988 [2024-12-13 18:18:58.182340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92642 ] 00:24:23.988 [2024-12-13 18:18:58.325608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.988 [2024-12-13 18:18:58.354820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.932 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.932 18:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:24.932 18:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:24.932 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:24.932 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:24.932 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:24.932 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:24.932 18:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:24.932 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:24.932 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:24.932 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:24.932 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:24.932 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:24.932 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:24.932 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:24.932 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:25.193 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:25.193 { 00:24:25.193 "name": "nvme0n1", 00:24:25.193 "aliases": [ 00:24:25.193 "25bfeb84-3bd8-4ca0-83ef-ddb01489fc11" 00:24:25.193 ], 00:24:25.193 "product_name": "NVMe disk", 00:24:25.193 "block_size": 4096, 00:24:25.193 "num_blocks": 1310720, 00:24:25.193 "uuid": "25bfeb84-3bd8-4ca0-83ef-ddb01489fc11", 00:24:25.193 "numa_id": -1, 00:24:25.193 "assigned_rate_limits": { 00:24:25.193 "rw_ios_per_sec": 0, 00:24:25.193 "rw_mbytes_per_sec": 0, 00:24:25.193 "r_mbytes_per_sec": 0, 00:24:25.193 "w_mbytes_per_sec": 0 00:24:25.193 }, 00:24:25.193 "claimed": true, 00:24:25.193 "claim_type": "read_many_write_one", 00:24:25.193 "zoned": false, 00:24:25.193 "supported_io_types": { 00:24:25.193 "read": true, 00:24:25.193 "write": true, 00:24:25.193 "unmap": true, 00:24:25.193 "flush": true, 00:24:25.193 "reset": true, 00:24:25.193 "nvme_admin": true, 00:24:25.193 "nvme_io": true, 00:24:25.193 "nvme_io_md": false, 00:24:25.193 "write_zeroes": true, 00:24:25.193 "zcopy": false, 00:24:25.193 "get_zone_info": false, 00:24:25.193 "zone_management": false, 00:24:25.193 "zone_append": false, 00:24:25.193 "compare": true, 00:24:25.193 "compare_and_write": false, 00:24:25.193 "abort": true, 00:24:25.193 "seek_hole": false, 00:24:25.193 "seek_data": false, 00:24:25.193 
"copy": true, 00:24:25.193 "nvme_iov_md": false 00:24:25.193 }, 00:24:25.193 "driver_specific": { 00:24:25.193 "nvme": [ 00:24:25.193 { 00:24:25.193 "pci_address": "0000:00:11.0", 00:24:25.193 "trid": { 00:24:25.193 "trtype": "PCIe", 00:24:25.194 "traddr": "0000:00:11.0" 00:24:25.194 }, 00:24:25.194 "ctrlr_data": { 00:24:25.194 "cntlid": 0, 00:24:25.194 "vendor_id": "0x1b36", 00:24:25.194 "model_number": "QEMU NVMe Ctrl", 00:24:25.194 "serial_number": "12341", 00:24:25.194 "firmware_revision": "8.0.0", 00:24:25.194 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:25.194 "oacs": { 00:24:25.194 "security": 0, 00:24:25.194 "format": 1, 00:24:25.194 "firmware": 0, 00:24:25.194 "ns_manage": 1 00:24:25.194 }, 00:24:25.194 "multi_ctrlr": false, 00:24:25.194 "ana_reporting": false 00:24:25.194 }, 00:24:25.194 "vs": { 00:24:25.194 "nvme_version": "1.4" 00:24:25.194 }, 00:24:25.194 "ns_data": { 00:24:25.194 "id": 1, 00:24:25.194 "can_share": false 00:24:25.194 } 00:24:25.194 } 00:24:25.194 ], 00:24:25.194 "mp_policy": "active_passive" 00:24:25.194 } 00:24:25.194 } 00:24:25.194 ]' 00:24:25.194 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:25.194 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:25.194 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=e825008a-9cec-4800-8c89-7b5872d92337 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:25.455 18:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e825008a-9cec-4800-8c89-7b5872d92337 00:24:25.716 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:25.978 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a7d8ea1c-8ab8-4bef-8e96-8ca0e2864b09 00:24:25.978 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a7d8ea1c-8ab8-4bef-8e96-8ca0e2864b09 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:26.239 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:26.500 { 00:24:26.500 "name": "3aaf6a9c-5695-4179-8829-6c55de9a7693", 00:24:26.500 "aliases": [ 00:24:26.500 "lvs/nvme0n1p0" 00:24:26.500 ], 00:24:26.500 "product_name": "Logical Volume", 00:24:26.500 "block_size": 4096, 00:24:26.500 "num_blocks": 26476544, 00:24:26.500 "uuid": "3aaf6a9c-5695-4179-8829-6c55de9a7693", 00:24:26.500 "assigned_rate_limits": { 00:24:26.500 "rw_ios_per_sec": 0, 00:24:26.500 "rw_mbytes_per_sec": 0, 00:24:26.500 "r_mbytes_per_sec": 0, 00:24:26.500 "w_mbytes_per_sec": 0 00:24:26.500 }, 00:24:26.500 "claimed": false, 00:24:26.500 "zoned": false, 00:24:26.500 "supported_io_types": { 00:24:26.500 "read": true, 00:24:26.500 "write": true, 00:24:26.500 "unmap": true, 00:24:26.500 "flush": false, 00:24:26.500 "reset": true, 00:24:26.500 "nvme_admin": false, 00:24:26.500 "nvme_io": false, 00:24:26.500 "nvme_io_md": false, 00:24:26.500 "write_zeroes": true, 00:24:26.500 "zcopy": false, 00:24:26.500 "get_zone_info": false, 00:24:26.500 "zone_management": false, 00:24:26.500 "zone_append": false, 00:24:26.500 "compare": false, 00:24:26.500 "compare_and_write": false, 00:24:26.500 "abort": false, 00:24:26.500 "seek_hole": true, 00:24:26.500 "seek_data": true, 00:24:26.500 "copy": false, 00:24:26.500 "nvme_iov_md": false 00:24:26.500 }, 00:24:26.500 "driver_specific": { 00:24:26.500 "lvol": { 00:24:26.500 "lvol_store_uuid": "a7d8ea1c-8ab8-4bef-8e96-8ca0e2864b09", 00:24:26.500 "base_bdev": "nvme0n1", 00:24:26.500 "thin_provision": true, 00:24:26.500 "num_allocated_clusters": 0, 00:24:26.500 "snapshot": false, 00:24:26.500 "clone": false, 00:24:26.500 "esnap_clone": false 00:24:26.500 } 00:24:26.500 } 00:24:26.500 } 00:24:26.500 ]' 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:26.500 18:19:00 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:26.760 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:26.760 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:26.760 18:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:26.760 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:26.760 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:26.760 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:26.760 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:26.761 18:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:27.022 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:27.022 { 00:24:27.022 "name": "3aaf6a9c-5695-4179-8829-6c55de9a7693", 00:24:27.022 "aliases": [ 00:24:27.022 "lvs/nvme0n1p0" 00:24:27.022 ], 00:24:27.022 "product_name": "Logical Volume", 00:24:27.022 "block_size": 4096, 00:24:27.022 "num_blocks": 26476544, 00:24:27.022 "uuid": "3aaf6a9c-5695-4179-8829-6c55de9a7693", 00:24:27.022 "assigned_rate_limits": { 00:24:27.022 "rw_ios_per_sec": 0, 00:24:27.022 "rw_mbytes_per_sec": 0, 00:24:27.022 "r_mbytes_per_sec": 0, 00:24:27.022 "w_mbytes_per_sec": 0 00:24:27.022 }, 00:24:27.022 "claimed": false, 00:24:27.022 "zoned": false, 00:24:27.022 "supported_io_types": { 00:24:27.022 "read": true, 00:24:27.022 "write": true, 00:24:27.022 "unmap": true, 00:24:27.022 "flush": false, 00:24:27.022 "reset": true, 00:24:27.022 "nvme_admin": false, 00:24:27.022 "nvme_io": false, 00:24:27.022 "nvme_io_md": false, 00:24:27.022 "write_zeroes": true, 00:24:27.022 "zcopy": false, 00:24:27.022 "get_zone_info": false, 00:24:27.022 "zone_management": false, 00:24:27.022 "zone_append": false, 00:24:27.022 "compare": false, 00:24:27.022 "compare_and_write": false, 00:24:27.022 "abort": false, 00:24:27.022 "seek_hole": true, 00:24:27.022 "seek_data": true, 00:24:27.022 "copy": false, 00:24:27.022 "nvme_iov_md": false 00:24:27.022 }, 00:24:27.022 "driver_specific": { 00:24:27.022 "lvol": { 00:24:27.022 "lvol_store_uuid": "a7d8ea1c-8ab8-4bef-8e96-8ca0e2864b09", 00:24:27.022 "base_bdev": "nvme0n1", 00:24:27.022 "thin_provision": true, 00:24:27.022 "num_allocated_clusters": 0, 00:24:27.022 "snapshot": false, 00:24:27.022 "clone": false, 00:24:27.022 "esnap_clone": false 00:24:27.022 } 00:24:27.022 } 00:24:27.022 } 00:24:27.022 ]' 00:24:27.022 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:27.022 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:27.022 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:27.022 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:27.022 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:27.022 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:27.022 18:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:27.022 18:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:27.283 18:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:27.283 18:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:27.283 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:27.283 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:27.283 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:27.283 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:27.283 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3aaf6a9c-5695-4179-8829-6c55de9a7693 00:24:27.283 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:27.283 { 00:24:27.283 "name": "3aaf6a9c-5695-4179-8829-6c55de9a7693", 00:24:27.283 "aliases": [ 00:24:27.283 "lvs/nvme0n1p0" 00:24:27.283 ], 00:24:27.283 "product_name": "Logical Volume", 00:24:27.283 "block_size": 4096, 00:24:27.283 "num_blocks": 26476544, 00:24:27.283 "uuid": "3aaf6a9c-5695-4179-8829-6c55de9a7693", 00:24:27.283 "assigned_rate_limits": { 00:24:27.283 "rw_ios_per_sec": 0, 00:24:27.283 "rw_mbytes_per_sec": 0, 00:24:27.283 "r_mbytes_per_sec": 0, 00:24:27.283 "w_mbytes_per_sec": 0 00:24:27.283 }, 00:24:27.283 "claimed": false, 00:24:27.283 "zoned": false, 00:24:27.283 "supported_io_types": { 00:24:27.283 "read": true, 00:24:27.283 "write": true, 00:24:27.283 "unmap": true, 00:24:27.283 "flush": false, 00:24:27.283 "reset": true, 00:24:27.283 "nvme_admin": false, 00:24:27.283 "nvme_io": false, 00:24:27.283 "nvme_io_md": false, 00:24:27.284 "write_zeroes": true, 00:24:27.284 "zcopy": false, 00:24:27.284 "get_zone_info": false, 00:24:27.284 "zone_management": false, 00:24:27.284 "zone_append": false, 00:24:27.284 "compare": false, 00:24:27.284 "compare_and_write": false, 00:24:27.284 "abort": false, 00:24:27.284 "seek_hole": true, 00:24:27.284 "seek_data": true, 00:24:27.284 "copy": false, 00:24:27.284 "nvme_iov_md": false 00:24:27.284 }, 00:24:27.284 "driver_specific": { 00:24:27.284 "lvol": { 00:24:27.284 "lvol_store_uuid": "a7d8ea1c-8ab8-4bef-8e96-8ca0e2864b09", 00:24:27.284 "base_bdev": "nvme0n1", 00:24:27.284 "thin_provision": true, 00:24:27.284 "num_allocated_clusters": 0, 00:24:27.284 "snapshot": false, 00:24:27.284 "clone": false, 00:24:27.284 "esnap_clone": false 00:24:27.284 } 00:24:27.284 } 00:24:27.284 } 00:24:27.284 ]' 00:24:27.284 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:27.284 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:27.284 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:27.546 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:27.546 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:27.546 18:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:27.546 18:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:27.546 18:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3aaf6a9c-5695-4179-8829-6c55de9a7693 
--l2p_dram_limit 10' 00:24:27.546 18:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:27.546 18:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:27.546 18:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:27.546 18:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3aaf6a9c-5695-4179-8829-6c55de9a7693 --l2p_dram_limit 10 -c nvc0n1p0 00:24:27.546 [2024-12-13 18:19:01.864792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.864912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:27.546 [2024-12-13 18:19:01.864959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:27.546 [2024-12-13 18:19:01.864981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.865040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.865061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.546 [2024-12-13 18:19:01.865078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:27.546 [2024-12-13 18:19:01.865097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.865122] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:27.546 [2024-12-13 18:19:01.865385] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:27.546 [2024-12-13 18:19:01.865422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.865439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.546 [2024-12-13 18:19:01.865456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:24:27.546 [2024-12-13 18:19:01.865471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.865532] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7551228f-b775-4f19-b32c-c5319322e129 00:24:27.546 [2024-12-13 18:19:01.866481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.866558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:27.546 [2024-12-13 18:19:01.866604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:27.546 [2024-12-13 18:19:01.866622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.871311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.871398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.546 [2024-12-13 18:19:01.871412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:24:27.546 [2024-12-13 18:19:01.871419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.871481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.871488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.546 [2024-12-13 18:19:01.871496] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:27.546 [2024-12-13 18:19:01.871502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.871544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.871554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:27.546 [2024-12-13 18:19:01.871561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:27.546 [2024-12-13 18:19:01.871567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.871585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:27.546 [2024-12-13 18:19:01.872847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.872874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.546 [2024-12-13 18:19:01.872881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:24:27.546 [2024-12-13 18:19:01.872888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.872915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.872926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:27.546 [2024-12-13 18:19:01.872932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:27.546 [2024-12-13 18:19:01.872941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.872955] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:27.546 [2024-12-13 18:19:01.873070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:27.546 [2024-12-13 18:19:01.873079] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:27.546 [2024-12-13 18:19:01.873088] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:27.546 [2024-12-13 18:19:01.873096] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:27.546 [2024-12-13 18:19:01.873107] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:27.546 [2024-12-13 18:19:01.873113] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:27.546 [2024-12-13 18:19:01.873121] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:27.546 [2024-12-13 18:19:01.873126] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:27.546 [2024-12-13 18:19:01.873133] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:27.546 [2024-12-13 18:19:01.873139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.873146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:27.546 [2024-12-13 18:19:01.873151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:24:27.546 [2024-12-13 18:19:01.873158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.873228] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.546 [2024-12-13 18:19:01.873236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:27.546 [2024-12-13 18:19:01.873254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:27.546 [2024-12-13 18:19:01.873266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.546 [2024-12-13 18:19:01.873342] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:27.546 [2024-12-13 18:19:01.873351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:27.546 [2024-12-13 18:19:01.873357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.546 [2024-12-13 18:19:01.873365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:27.546 [2024-12-13 18:19:01.873377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:27.546 [2024-12-13 18:19:01.873389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:27.546 [2024-12-13 18:19:01.873394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.546 [2024-12-13 18:19:01.873405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:27.546 [2024-12-13 18:19:01.873411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:27.546 [2024-12-13 18:19:01.873416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.546 [2024-12-13 18:19:01.873424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:27.546 [2024-12-13 18:19:01.873431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:27.546 [2024-12-13 18:19:01.873437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:27.546 [2024-12-13 18:19:01.873449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:27.546 [2024-12-13 18:19:01.873454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:27.546 [2024-12-13 18:19:01.873465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.546 [2024-12-13 18:19:01.873482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:27.546 [2024-12-13 18:19:01.873488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.546 [2024-12-13 18:19:01.873501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:27.546 [2024-12-13 18:19:01.873507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.546 [2024-12-13 18:19:01.873519] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:27.546 [2024-12-13 18:19:01.873528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.546 [2024-12-13 18:19:01.873541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:27.546 [2024-12-13 18:19:01.873546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:27.546 [2024-12-13 18:19:01.873553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.547 [2024-12-13 18:19:01.873559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:27.547 [2024-12-13 18:19:01.873567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:27.547 [2024-12-13 18:19:01.873573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.547 [2024-12-13 18:19:01.873580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:27.547 [2024-12-13 18:19:01.873586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:27.547 [2024-12-13 18:19:01.873592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.547 [2024-12-13 18:19:01.873598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:27.547 [2024-12-13 18:19:01.873605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:27.547 [2024-12-13 18:19:01.873610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.547 [2024-12-13 18:19:01.873617] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:27.547 [2024-12-13 18:19:01.873630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:27.547 [2024-12-13 18:19:01.873642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.547 [2024-12-13 18:19:01.873650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.547 [2024-12-13 18:19:01.873658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:27.547 [2024-12-13 18:19:01.873663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:27.547 [2024-12-13 18:19:01.873669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:27.547 [2024-12-13 18:19:01.873674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:27.547 [2024-12-13 18:19:01.873680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:27.547 [2024-12-13 18:19:01.873685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:27.547 [2024-12-13 18:19:01.873692] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:27.547 [2024-12-13 18:19:01.873700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.547 [2024-12-13 18:19:01.873708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:27.547 [2024-12-13 18:19:01.873713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:27.547 [2024-12-13 18:19:01.873721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:27.547 [2024-12-13 18:19:01.873726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:27.547 [2024-12-13 18:19:01.873732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:27.547 [2024-12-13 18:19:01.873737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:27.547 [2024-12-13 18:19:01.873745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:27.547 [2024-12-13 18:19:01.873750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:27.547 [2024-12-13 18:19:01.873756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:27.547 [2024-12-13 18:19:01.873761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:27.547 [2024-12-13 18:19:01.873768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:27.547 [2024-12-13 18:19:01.873774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:27.547 [2024-12-13 18:19:01.873780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:27.547 [2024-12-13 18:19:01.873786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:27.547 [2024-12-13 18:19:01.873793] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:27.547 [2024-12-13 18:19:01.873799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.547 [2024-12-13 18:19:01.873809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:27.547 [2024-12-13 18:19:01.873814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:27.547 [2024-12-13 18:19:01.873820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:27.547 [2024-12-13 18:19:01.873826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:27.547 [2024-12-13 18:19:01.873833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.547 [2024-12-13 18:19:01.873838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:27.547 [2024-12-13 18:19:01.873847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:24:27.547 [2024-12-13 18:19:01.873852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.547 [2024-12-13 18:19:01.873880] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:27.547 [2024-12-13 18:19:01.873888] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:30.847 [2024-12-13 18:19:04.477659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.847 [2024-12-13 18:19:04.477728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:30.847 [2024-12-13 18:19:04.477746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2603.763 ms 00:24:30.847 [2024-12-13 18:19:04.477755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.847 [2024-12-13 18:19:04.487187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.847 [2024-12-13 18:19:04.487232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:30.847 [2024-12-13 18:19:04.487261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.345 ms 00:24:30.847 [2024-12-13 18:19:04.487270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.847 [2024-12-13 18:19:04.487380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.847 [2024-12-13 18:19:04.487391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:30.847 [2024-12-13 18:19:04.487401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:30.848 [2024-12-13 18:19:04.487409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.497051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.497092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:30.848 [2024-12-13 18:19:04.497104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.593 ms 00:24:30.848 [2024-12-13 18:19:04.497115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.497144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.497157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:30.848 [2024-12-13 18:19:04.497168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:30.848 [2024-12-13 18:19:04.497175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.497585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.497609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:30.848 [2024-12-13 18:19:04.497621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:24:30.848 [2024-12-13 18:19:04.497629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.497750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.497765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:30.848 [2024-12-13 18:19:04.497776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:24:30.848 [2024-12-13 18:19:04.497785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.504220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.504283] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:30.848 [2024-12-13 18:19:04.504296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.398 ms 00:24:30.848 [2024-12-13 18:19:04.504304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.524908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:30.848 [2024-12-13 18:19:04.528808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.528860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:30.848 [2024-12-13 18:19:04.528876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.445 ms 00:24:30.848 [2024-12-13 18:19:04.528890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.610398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.610467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:30.848 [2024-12-13 18:19:04.610485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.459 ms 00:24:30.848 [2024-12-13 18:19:04.610500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.610713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.610727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:30.848 [2024-12-13 18:19:04.610742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:24:30.848 [2024-12-13 18:19:04.610756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.616021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.616230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:30.848 [2024-12-13 18:19:04.616269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.214 ms 00:24:30.848 [2024-12-13 18:19:04.616280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.620622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.620677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:30.848 [2024-12-13 18:19:04.620688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.299 ms 00:24:30.848 [2024-12-13 18:19:04.620698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.621048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.621061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:30.848 [2024-12-13 18:19:04.621076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:24:30.848 [2024-12-13 18:19:04.621088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.665520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.665720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:30.848 [2024-12-13 18:19:04.665744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.394 ms 00:24:30.848 [2024-12-13 18:19:04.665755] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.672807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.672982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:30.848 [2024-12-13 18:19:04.673001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.982 ms 00:24:30.848 [2024-12-13 18:19:04.673012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.679052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.679111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:30.848 [2024-12-13 18:19:04.679122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.935 ms 00:24:30.848 [2024-12-13 18:19:04.679131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.685413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.685471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:30.848 [2024-12-13 18:19:04.685482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.236 ms 00:24:30.848 [2024-12-13 18:19:04.685494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.685543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.685556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:30.848 [2024-12-13 18:19:04.685564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:30.848 [2024-12-13 18:19:04.685575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.685646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.848 [2024-12-13 18:19:04.685658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:30.848 [2024-12-13 18:19:04.685667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:30.848 [2024-12-13 18:19:04.685680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.848 [2024-12-13 18:19:04.686837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2821.552 ms, result 0 00:24:30.848 { 00:24:30.848 "name": "ftl0", 00:24:30.848 "uuid": "7551228f-b775-4f19-b32c-c5319322e129" 00:24:30.848 } 00:24:30.848 18:19:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:30.848 18:19:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:30.848 18:19:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:30.848 18:19:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:30.848 18:19:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:30.848 /dev/nbd0 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:30.848 1+0 records in 00:24:30.848 1+0 records out 00:24:30.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284339 s, 14.4 MB/s 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:30.848 18:19:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:31.108 [2024-12-13 18:19:05.232512] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:31.108 [2024-12-13 18:19:05.232867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92773 ] 00:24:31.108 [2024-12-13 18:19:05.380007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.108 [2024-12-13 18:19:05.409371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.495  [2024-12-13T18:19:07.815Z] Copying: 189/1024 [MB] (189 MBps) [2024-12-13T18:19:08.759Z] Copying: 383/1024 [MB] (193 MBps) [2024-12-13T18:19:09.753Z] Copying: 646/1024 [MB] (262 MBps) [2024-12-13T18:19:10.013Z] Copying: 902/1024 [MB] (256 MBps) [2024-12-13T18:19:10.274Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:24:35.897 00:24:35.897 18:19:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:38.444 18:19:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:38.444 [2024-12-13 18:19:12.260964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
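
The xtrace above covers three setup steps: the freshly created ftl0 bdev is exported as a kernel block device over nbd, the waitfornbd helper polls /proc/partitions and then confirms the device answers a 4 KiB O_DIRECT read, and spdk_dd stages 1 GiB of random data as the reference file. A condensed standalone sketch of the same sequence follows; the commands, paths, and sizes are lifted from the trace itself, while the sleep between retries is an assumption (the harness's own loop body is visible above):

  # Hedged reproduction of the traced setup, not the test script itself.
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
  for i in $(seq 1 20); do                        # same retry budget as waitfornbd
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1                                   # assumed; the trace shows no explicit delay
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # read-back probe
  spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
  md5sum testfile                                 # reference checksum for the later compare
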
00:24:38.444 [2024-12-13 18:19:12.261286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92850 ] 00:24:38.444 [2024-12-13 18:19:12.407308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.444 [2024-12-13 18:19:12.425375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.387  [2024-12-13T18:19:14.707Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-13T18:19:15.648Z] Copying: 38/1024 [MB] (22 MBps) [2024-12-13T18:19:16.591Z] Copying: 62/1024 [MB] (24 MBps) [2024-12-13T18:19:17.536Z] Copying: 84/1024 [MB] (21 MBps) [2024-12-13T18:19:18.479Z] Copying: 100/1024 [MB] (16 MBps) [2024-12-13T18:19:19.865Z] Copying: 117/1024 [MB] (17 MBps) [2024-12-13T18:19:20.808Z] Copying: 132/1024 [MB] (14 MBps) [2024-12-13T18:19:21.750Z] Copying: 150/1024 [MB] (17 MBps) [2024-12-13T18:19:22.694Z] Copying: 175/1024 [MB] (24 MBps) [2024-12-13T18:19:23.637Z] Copying: 194/1024 [MB] (19 MBps) [2024-12-13T18:19:24.581Z] Copying: 210/1024 [MB] (16 MBps) [2024-12-13T18:19:25.522Z] Copying: 228/1024 [MB] (17 MBps) [2024-12-13T18:19:26.908Z] Copying: 245/1024 [MB] (17 MBps) [2024-12-13T18:19:27.481Z] Copying: 265/1024 [MB] (19 MBps) [2024-12-13T18:19:28.868Z] Copying: 291/1024 [MB] (25 MBps) [2024-12-13T18:19:29.811Z] Copying: 310/1024 [MB] (19 MBps) [2024-12-13T18:19:30.754Z] Copying: 332/1024 [MB] (21 MBps) [2024-12-13T18:19:31.698Z] Copying: 348/1024 [MB] (16 MBps) [2024-12-13T18:19:32.642Z] Copying: 362/1024 [MB] (13 MBps) [2024-12-13T18:19:33.634Z] Copying: 378/1024 [MB] (16 MBps) [2024-12-13T18:19:34.567Z] Copying: 391/1024 [MB] (12 MBps) [2024-12-13T18:19:35.500Z] Copying: 423/1024 [MB] (32 MBps) [2024-12-13T18:19:36.872Z] Copying: 441/1024 [MB] (18 MBps) [2024-12-13T18:19:37.805Z] Copying: 463/1024 [MB] (21 MBps) [2024-12-13T18:19:38.737Z] Copying: 483/1024 [MB] (19 MBps) [2024-12-13T18:19:39.670Z] Copying: 500/1024 [MB] (17 MBps) [2024-12-13T18:19:40.603Z] Copying: 519/1024 [MB] (18 MBps) [2024-12-13T18:19:41.537Z] Copying: 535/1024 [MB] (16 MBps) [2024-12-13T18:19:42.912Z] Copying: 555/1024 [MB] (20 MBps) [2024-12-13T18:19:43.478Z] Copying: 579/1024 [MB] (23 MBps) [2024-12-13T18:19:44.851Z] Copying: 599/1024 [MB] (20 MBps) [2024-12-13T18:19:45.784Z] Copying: 621/1024 [MB] (22 MBps) [2024-12-13T18:19:46.716Z] Copying: 642/1024 [MB] (20 MBps) [2024-12-13T18:19:47.649Z] Copying: 662/1024 [MB] (20 MBps) [2024-12-13T18:19:48.583Z] Copying: 683/1024 [MB] (20 MBps) [2024-12-13T18:19:49.517Z] Copying: 705/1024 [MB] (22 MBps) [2024-12-13T18:19:50.891Z] Copying: 730/1024 [MB] (25 MBps) [2024-12-13T18:19:51.823Z] Copying: 744/1024 [MB] (14 MBps) [2024-12-13T18:19:52.757Z] Copying: 765/1024 [MB] (21 MBps) [2024-12-13T18:19:53.699Z] Copying: 788/1024 [MB] (22 MBps) [2024-12-13T18:19:54.643Z] Copying: 808/1024 [MB] (19 MBps) [2024-12-13T18:19:55.582Z] Copying: 828/1024 [MB] (20 MBps) [2024-12-13T18:19:56.579Z] Copying: 848/1024 [MB] (20 MBps) [2024-12-13T18:19:57.519Z] Copying: 871/1024 [MB] (23 MBps) [2024-12-13T18:19:58.904Z] Copying: 892/1024 [MB] (20 MBps) [2024-12-13T18:19:59.840Z] Copying: 907/1024 [MB] (14 MBps) [2024-12-13T18:20:00.774Z] Copying: 926/1024 [MB] (18 MBps) [2024-12-13T18:20:01.707Z] Copying: 947/1024 [MB] (20 MBps) [2024-12-13T18:20:02.640Z] Copying: 969/1024 [MB] (22 MBps) [2024-12-13T18:20:03.575Z] Copying: 992/1024 [MB] (23 MBps) 
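
The run of Copying: records here is spdk_dd's periodic progress output while the 1 GiB reference file is written through ftl0 via /dev/nbd0 with --oflag=direct; the per-interval rate swings between roughly 12 and 32 MBps, presumably as the NV write cache fills and compacts. Given a captured copy of this log (run.log below is a hypothetical file name), the interval rates can be pulled back out with a throwaway one-liner whose field layout is assumed from the records themselves:

  # Mean of the per-interval MBps figures; matches "(N MBps)" but not "(average N MBps)".
  grep -o '([0-9]* MBps)' run.log | tr -d '()MBps ' |
      awk '{ sum += $1; n++ } END { if (n) printf "%d samples, %.1f MBps mean\n", n, sum/n }'
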
[2024-12-13T18:20:03.835Z] Copying: 1019/1024 [MB] (27 MBps) [2024-12-13T18:20:03.835Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:25:29.458 00:25:29.458 18:20:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:29.458 18:20:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:29.718 18:20:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:29.980 [2024-12-13 18:20:04.164996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.165036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:29.980 [2024-12-13 18:20:04.165048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:29.980 [2024-12-13 18:20:04.165055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.165075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:29.980 [2024-12-13 18:20:04.165500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.165517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:29.980 [2024-12-13 18:20:04.165525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:25:29.980 [2024-12-13 18:20:04.165532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.167493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.167522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:29.980 [2024-12-13 18:20:04.167530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.943 ms 00:25:29.980 [2024-12-13 18:20:04.167538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.182466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.182497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:29.980 [2024-12-13 18:20:04.182511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.914 ms 00:25:29.980 [2024-12-13 18:20:04.182518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.187281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.187306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:29.980 [2024-12-13 18:20:04.187317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:25:29.980 [2024-12-13 18:20:04.187325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.188632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.188751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:29.980 [2024-12-13 18:20:04.188763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:25:29.980 [2024-12-13 18:20:04.188770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.193207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.193240] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:29.980 [2024-12-13 18:20:04.193266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.399 ms 00:25:29.980 [2024-12-13 18:20:04.193273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.193367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.193376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:29.980 [2024-12-13 18:20:04.193383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:29.980 [2024-12-13 18:20:04.193396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.195317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.195423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:29.980 [2024-12-13 18:20:04.195435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.909 ms 00:25:29.980 [2024-12-13 18:20:04.195442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.196867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.196894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:29.980 [2024-12-13 18:20:04.196901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:25:29.980 [2024-12-13 18:20:04.196907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.197914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.197943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:29.980 [2024-12-13 18:20:04.197950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:25:29.980 [2024-12-13 18:20:04.197957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.198938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.980 [2024-12-13 18:20:04.199029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:29.980 [2024-12-13 18:20:04.199040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:25:29.980 [2024-12-13 18:20:04.199047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.980 [2024-12-13 18:20:04.199069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:29.980 [2024-12-13 18:20:04.199081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199125] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:29.980 [2024-12-13 18:20:04.199264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 
18:20:04.199297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:25:29.981 [2024-12-13 18:20:04.199459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:29.981 [2024-12-13 18:20:04.199747] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:29.981 [2024-12-13 18:20:04.199753] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7551228f-b775-4f19-b32c-c5319322e129 00:25:29.981 [2024-12-13 18:20:04.199761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:29.981 [2024-12-13 18:20:04.199766] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:29.981 [2024-12-13 18:20:04.199773] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:29.981 [2024-12-13 18:20:04.199778] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:29.981 [2024-12-13 18:20:04.199784] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:25:29.981 [2024-12-13 18:20:04.199791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:29.981 [2024-12-13 18:20:04.199798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:29.981 [2024-12-13 18:20:04.199802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:29.981 [2024-12-13 18:20:04.199813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:29.981 [2024-12-13 18:20:04.199819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.981 [2024-12-13 18:20:04.199825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:29.981 [2024-12-13 18:20:04.199831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:25:29.981 [2024-12-13 18:20:04.199843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.981 [2024-12-13 18:20:04.201089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.981 [2024-12-13 18:20:04.201108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:29.981 [2024-12-13 18:20:04.201114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:25:29.981 [2024-12-13 18:20:04.201122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.981 [2024-12-13 18:20:04.201197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.981 [2024-12-13 18:20:04.201208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:29.981 [2024-12-13 18:20:04.201214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:29.981 [2024-12-13 18:20:04.201221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.981 [2024-12-13 18:20:04.205669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.205769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.982 [2024-12-13 18:20:04.205780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.205787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.205832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.205842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.982 [2024-12-13 18:20:04.205848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.205856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.205894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.205905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.982 [2024-12-13 18:20:04.205911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.205917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.205930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.205938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.982 [2024-12-13 18:20:04.205943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 
18:20:04.205952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.213794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.213830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.982 [2024-12-13 18:20:04.213837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.213844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.220311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.220442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.982 [2024-12-13 18:20:04.220455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.220464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.220519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.220530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.982 [2024-12-13 18:20:04.220537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.220544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.220569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.220578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.982 [2024-12-13 18:20:04.220584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.220591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.220643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.220652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.982 [2024-12-13 18:20:04.220658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.220666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.220688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.220697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:29.982 [2024-12-13 18:20:04.220703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.220710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.220740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.220750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.982 [2024-12-13 18:20:04.220756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.220763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.220798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.982 [2024-12-13 18:20:04.220806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.982 [2024-12-13 18:20:04.220818] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.982 [2024-12-13 18:20:04.220825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.982 [2024-12-13 18:20:04.220936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.917 ms, result 0 00:25:29.982 true 00:25:29.982 18:20:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92642 00:25:29.982 18:20:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92642 00:25:29.982 18:20:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:29.982 [2024-12-13 18:20:04.308334] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:29.982 [2024-12-13 18:20:04.308440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93390 ] 00:25:30.243 [2024-12-13 18:20:04.447979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.243 [2024-12-13 18:20:04.464554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.185  [2024-12-13T18:20:06.949Z] Copying: 259/1024 [MB] (259 MBps) [2024-12-13T18:20:07.521Z] Copying: 519/1024 [MB] (259 MBps) [2024-12-13T18:20:08.907Z] Copying: 777/1024 [MB] (258 MBps) [2024-12-13T18:20:08.907Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:25:34.530 00:25:34.530 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92642 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:34.530 18:20:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:34.530 [2024-12-13 18:20:08.671129] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
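
The unload traced above is the clean half of the experiment: every metadata region is persisted and the device is marked clean, all in 55.917 ms. The harness then disposes of the original spdk_tgt with SIGKILL (the shell's "92642 Killed" notice is the deferred report of that signal) and keeps driving I/O from standalone spdk_dd processes, which bring ftl0 up directly from the subsystem config saved to ftl.json earlier. Condensed from the xtrace, with this run's pid kept and the long repo paths shortened:

  kill -9 92642                                  # the spdk_tgt that created ftl0
  rm -f /dev/shm/spdk_tgt_trace.pid92642
  spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
  # Reopen ftl0 inside spdk_dd from the saved JSON; --seek skips one
  # file's worth of output blocks, so the second copy evidently lands
  # right after the first GiB already written to the device.
  spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=config/ftl.json
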
00:25:34.530 [2024-12-13 18:20:08.671226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93439 ] 00:25:34.530 [2024-12-13 18:20:08.810488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.530 [2024-12-13 18:20:08.826884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.790 [2024-12-13 18:20:08.908332] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.790 [2024-12-13 18:20:08.908390] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.790 [2024-12-13 18:20:08.970014] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:34.790 [2024-12-13 18:20:08.970272] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:34.790 [2024-12-13 18:20:08.970816] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:35.052 [2024-12-13 18:20:09.203781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.203817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:35.052 [2024-12-13 18:20:09.203827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:35.052 [2024-12-13 18:20:09.203836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.203873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.203880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.052 [2024-12-13 18:20:09.203889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:35.052 [2024-12-13 18:20:09.203896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.203911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:35.052 [2024-12-13 18:20:09.204084] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:35.052 [2024-12-13 18:20:09.204095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.204104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.052 [2024-12-13 18:20:09.204111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:25:35.052 [2024-12-13 18:20:09.204117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.205039] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:35.052 [2024-12-13 18:20:09.206986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.207010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:35.052 [2024-12-13 18:20:09.207017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:25:35.052 [2024-12-13 18:20:09.207022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.207065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.207074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:35.052 [2024-12-13 18:20:09.207080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:35.052 [2024-12-13 18:20:09.207085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.211403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.211484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.052 [2024-12-13 18:20:09.211536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.286 ms 00:25:35.052 [2024-12-13 18:20:09.211553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.211655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.211702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.052 [2024-12-13 18:20:09.211740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:35.052 [2024-12-13 18:20:09.211761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.211812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.211876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:35.052 [2024-12-13 18:20:09.211894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:35.052 [2024-12-13 18:20:09.211908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.211941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:35.052 [2024-12-13 18:20:09.213136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.213216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.052 [2024-12-13 18:20:09.213281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:25:35.052 [2024-12-13 18:20:09.213304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.213449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.213474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.052 [2024-12-13 18:20:09.213490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:35.052 [2024-12-13 18:20:09.213537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.213566] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.052 [2024-12-13 18:20:09.213592] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.052 [2024-12-13 18:20:09.213662] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.052 [2024-12-13 18:20:09.213730] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.052 [2024-12-13 18:20:09.213830] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.052 [2024-12-13 18:20:09.213910] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.052 
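
This second startup differs from the first in kind. The nvc0n1 bdev is not there yet when spdk_dd starts (hence the two "unable to find bdev" retries), the blobstore backing the NV cache is replayed ("Performing recovery on blobstore") since its owning process was SIGKILLed, and the superblock is loaded and validated rather than created; the steps that follow are correspondingly Restore operations instead of the first boot's Clear and Initialize ones. Because every trace_step record reports its name and its duration on consecutive log lines, a captured log (run.log is again hypothetical) can be reduced to a slowest-steps table along these lines:

  # Pair each step name with the duration on the following record, then
  # rank by the millisecond figure; the record shape is assumed from above.
  grep -oE '\] name: .*|\] duration: [0-9.]+ ms' run.log | paste - - | sort -t: -k3 -rn | head
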
[2024-12-13 18:20:09.213935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:35.052 [2024-12-13 18:20:09.213958] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.052 [2024-12-13 18:20:09.213981] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.052 [2024-12-13 18:20:09.214003] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:35.052 [2024-12-13 18:20:09.214043] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.052 [2024-12-13 18:20:09.214059] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.052 [2024-12-13 18:20:09.214076] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.052 [2024-12-13 18:20:09.214091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.214105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.052 [2024-12-13 18:20:09.214120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:25:35.052 [2024-12-13 18:20:09.214136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.214212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.052 [2024-12-13 18:20:09.214234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.052 [2024-12-13 18:20:09.214264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:35.052 [2024-12-13 18:20:09.214279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.052 [2024-12-13 18:20:09.214365] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.052 [2024-12-13 18:20:09.214385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.053 [2024-12-13 18:20:09.214401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.053 [2024-12-13 18:20:09.214416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.053 [2024-12-13 18:20:09.214459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.053 [2024-12-13 18:20:09.214476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.053 [2024-12-13 18:20:09.214491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:35.053 [2024-12-13 18:20:09.214527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.053 [2024-12-13 18:20:09.214543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:35.053 [2024-12-13 18:20:09.214557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.053 [2024-12-13 18:20:09.214594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.053 [2024-12-13 18:20:09.214610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:35.053 [2024-12-13 18:20:09.214624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.053 [2024-12-13 18:20:09.214637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.053 [2024-12-13 18:20:09.214651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:35.053 [2024-12-13 18:20:09.214665] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.053 [2024-12-13 18:20:09.214732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:35.053 [2024-12-13 18:20:09.214748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:35.053 [2024-12-13 18:20:09.214762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.053 [2024-12-13 18:20:09.214775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.053 [2024-12-13 18:20:09.214789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:35.053 [2024-12-13 18:20:09.214803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.053 [2024-12-13 18:20:09.214817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.053 [2024-12-13 18:20:09.214830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:35.053 [2024-12-13 18:20:09.214843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.053 [2024-12-13 18:20:09.214879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.053 [2024-12-13 18:20:09.214901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:35.053 [2024-12-13 18:20:09.214915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.053 [2024-12-13 18:20:09.214929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.053 [2024-12-13 18:20:09.214942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:35.053 [2024-12-13 18:20:09.214956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.053 [2024-12-13 18:20:09.214970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.053 [2024-12-13 18:20:09.215009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:35.053 [2024-12-13 18:20:09.215026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.053 [2024-12-13 18:20:09.215040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.053 [2024-12-13 18:20:09.215054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:35.053 [2024-12-13 18:20:09.215067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.053 [2024-12-13 18:20:09.215081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.053 [2024-12-13 18:20:09.215094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:35.053 [2024-12-13 18:20:09.215126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.053 [2024-12-13 18:20:09.215210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.053 [2024-12-13 18:20:09.215234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:35.053 [2024-12-13 18:20:09.215284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.053 [2024-12-13 18:20:09.215320] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.053 [2024-12-13 18:20:09.215337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.053 [2024-12-13 18:20:09.215380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.053 [2024-12-13 18:20:09.215402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.053 [2024-12-13 
18:20:09.215420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:35.053 [2024-12-13 18:20:09.215434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.053 [2024-12-13 18:20:09.215473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.053 [2024-12-13 18:20:09.215489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.053 [2024-12-13 18:20:09.215503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.053 [2024-12-13 18:20:09.215517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.053 [2024-12-13 18:20:09.215550] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.053 [2024-12-13 18:20:09.215576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.053 [2024-12-13 18:20:09.215643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:35.053 [2024-12-13 18:20:09.215665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:35.053 [2024-12-13 18:20:09.215686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:35.053 [2024-12-13 18:20:09.215710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:35.053 [2024-12-13 18:20:09.215800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:35.053 [2024-12-13 18:20:09.215829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:35.053 [2024-12-13 18:20:09.215852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:35.053 [2024-12-13 18:20:09.215897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:35.053 [2024-12-13 18:20:09.215920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:35.053 [2024-12-13 18:20:09.215941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:35.053 [2024-12-13 18:20:09.215981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:35.053 [2024-12-13 18:20:09.216028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:35.053 [2024-12-13 18:20:09.216064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:35.053 [2024-12-13 18:20:09.216088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:35.053 [2024-12-13 18:20:09.216110] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:35.053 [2024-12-13 18:20:09.216156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.053 [2024-12-13 18:20:09.216181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.053 [2024-12-13 18:20:09.216202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.053 [2024-12-13 18:20:09.216249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.053 [2024-12-13 18:20:09.216261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.053 [2024-12-13 18:20:09.216267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.053 [2024-12-13 18:20:09.216274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.053 [2024-12-13 18:20:09.216280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:25:35.053 [2024-12-13 18:20:09.216286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.053 [2024-12-13 18:20:09.223902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.053 [2024-12-13 18:20:09.223920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.053 [2024-12-13 18:20:09.223928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.565 ms 00:25:35.053 [2024-12-13 18:20:09.223934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.053 [2024-12-13 18:20:09.223991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.053 [2024-12-13 18:20:09.223999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.053 [2024-12-13 18:20:09.224006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:35.053 [2024-12-13 18:20:09.224012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.053 [2024-12-13 18:20:09.239097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.053 [2024-12-13 18:20:09.239205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.053 [2024-12-13 18:20:09.239218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.049 ms 00:25:35.053 [2024-12-13 18:20:09.239224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.053 [2024-12-13 18:20:09.239269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.053 [2024-12-13 18:20:09.239277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.053 [2024-12-13 18:20:09.239284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:35.053 [2024-12-13 18:20:09.239292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.053 [2024-12-13 18:20:09.239607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.053 [2024-12-13 18:20:09.239619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.053 [2024-12-13 18:20:09.239626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:25:35.053 [2024-12-13 18:20:09.239631] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.053 [2024-12-13 18:20:09.239725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.053 [2024-12-13 18:20:09.239734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.053 [2024-12-13 18:20:09.239740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:35.053 [2024-12-13 18:20:09.239748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.053 [2024-12-13 18:20:09.244780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.244805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.054 [2024-12-13 18:20:09.244823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.016 ms 00:25:35.054 [2024-12-13 18:20:09.244832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.247301] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:35.054 [2024-12-13 18:20:09.247329] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:35.054 [2024-12-13 18:20:09.247346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.247356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:35.054 [2024-12-13 18:20:09.247366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:25:35.054 [2024-12-13 18:20:09.247375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.260520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.260636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:35.054 [2024-12-13 18:20:09.260649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.099 ms 00:25:35.054 [2024-12-13 18:20:09.260656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.262220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.262239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:35.054 [2024-12-13 18:20:09.262256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:25:35.054 [2024-12-13 18:20:09.262261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.263503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.263516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:35.054 [2024-12-13 18:20:09.263523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:25:35.054 [2024-12-13 18:20:09.263528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.263770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.263783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.054 [2024-12-13 18:20:09.263790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:25:35.054 [2024-12-13 18:20:09.263795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 
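Each FTL management step above is reported as a repeating group of trace_step records from mngt/ftl_mngt.c: an Action marker, then name:, duration:, and status: records. A minimal sketch of tallying those durations offline, assuming the records come one per line as in the raw console output (the regexes and the tally helper are illustrative, not part of SPDK):

import re
import sys

# Each FTL management step appears as a group of trace_step records:
#   ... [FTL][ftl0] name: Initialize NV cache
#   ... [FTL][ftl0] duration: 15.049 ms
#   ... [FTL][ftl0] status: 0
NAME_RE = re.compile(r"\[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE = re.compile(r"\[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def tally(lines):
    """Pair each step name with the duration record that follows it."""
    steps, pending = [], None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1)
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            steps.append((pending, float(m.group(1))))
            pending = None
    return steps

if __name__ == "__main__":
    steps = tally(sys.stdin)
    for name, ms in sorted(steps, key=lambda s: -s[1]):
        print(f"{ms:10.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):10.3f} ms  total")

Summed this way the steps land a little under the 86.694 ms that finish_msg reports for 'FTL startup' just below, since idle time between steps is not attributed to any step.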
[2024-12-13 18:20:09.277850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.278107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:35.054 [2024-12-13 18:20:09.278151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.043 ms 00:25:35.054 [2024-12-13 18:20:09.278170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.283961] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:35.054 [2024-12-13 18:20:09.285891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.285963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:35.054 [2024-12-13 18:20:09.286013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.653 ms 00:25:35.054 [2024-12-13 18:20:09.286030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.286085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.286232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:35.054 [2024-12-13 18:20:09.286271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:35.054 [2024-12-13 18:20:09.286286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.286362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.286420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:35.054 [2024-12-13 18:20:09.286437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:35.054 [2024-12-13 18:20:09.286452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.286478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.286494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:35.054 [2024-12-13 18:20:09.286509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:35.054 [2024-12-13 18:20:09.286528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.286683] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:35.054 [2024-12-13 18:20:09.286709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.286726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:35.054 [2024-12-13 18:20:09.286742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:35.054 [2024-12-13 18:20:09.286797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.289634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.289712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:35.054 [2024-12-13 18:20:09.289761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.782 ms 00:25:35.054 [2024-12-13 18:20:09.289782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.054 [2024-12-13 18:20:09.289841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.054 [2024-12-13 18:20:09.289916] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:35.054 [2024-12-13 18:20:09.289938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:25:35.054 [2024-12-13 18:20:09.289952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.054 [2024-12-13 18:20:09.290791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 86.694 ms, result 0
00:25:35.999 [49 interim dd progress records elided: Copying advanced from 26/1024 MB (2024-12-13T18:20:11Z) to 1023/1024 MB (2024-12-13T18:20:59Z)]
[2024-12-13T18:20:59.418Z] Copying: 1024/1024 [MB] (average 20 MBps)
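The completed copy above is internally consistent with the rest of the log: the spdk_dd invocation at ftl/dirty_shutdown.sh@93 below passes --count=262144, which at the 4 KiB FTL block size implied by the layout dumps works out to exactly 1024 MiB, and 1024 MiB at the reported 20 MBps average is about 51 s, matching the progress timestamps. A quick sanity check of that arithmetic, plus the write-amplification figure from the stats dump that follows (a sketch; the block size is an assumption inferred from the log, not stated on any one line):

# Cross-check numbers that appear verbatim in this log; the 4 KiB FTL
# block size is an inference from the layout dump (blk_sz 0x5000 =
# 20480 blocks is printed as 80.00 MiB).
count = 262144                  # --count from the spdk_dd command line
block_bytes = 4096              # assumed FTL block size
mib = count * block_bytes / (1024 * 1024)
print(mib)                      # 1024.0 -> "Copying: 1024/1024 [MB]"
print(mib / 20)                 # 51.2 s at "average 20 MBps", matching
                                # the 18:20:09 -> 18:20:59 progress span

# WAF from the ftl_debug stats dump further down:
print(105920 / 104960)          # 1.00914... -> logged as "WAF: 1.0091"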
[2024-12-13 18:20:59.270415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:25.041 [2024-12-13 18:20:59.270514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:25.041 [2024-12-13 18:20:59.270537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:26:25.041 [2024-12-13 18:20:59.270552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:25.041 [2024-12-13 18:20:59.277973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:25.041 [2024-12-13 18:20:59.282083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:25.041 [2024-12-13 18:20:59.282401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:25.041 [2024-12-13 18:20:59.282563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.680 ms
00:26:25.041 [2024-12-13 18:20:59.282620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:25.041 [2024-12-13 18:20:59.295101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:25.041 [2024-12-13 18:20:59.295154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:25.041 [2024-12-13 18:20:59.295169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.130 ms
00:26:25.041 [2024-12-13 18:20:59.295179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:25.041 [2024-12-13 18:20:59.321811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:25.041 [2024-12-13 18:20:59.322012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:25.041 [2024-12-13 18:20:59.322035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.612 ms
00:26:25.041 [2024-12-13 18:20:59.322044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:25.041 [2024-12-13 18:20:59.328340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:25.041 [2024-12-13 18:20:59.328525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:26:25.041 [2024-12-13 18:20:59.328545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.176 ms
00:26:25.041 [2024-12-13 18:20:59.328554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:25.041 [2024-12-13 18:20:59.330540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:25.041 [2024-12-13 18:20:59.330589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:26:25.041 [2024-12-13 18:20:59.330601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.927 ms
00:26:25.041 [2024-12-13 18:20:59.330611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:25.041 [2024-12-13 18:20:59.335651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:25.041 [2024-12-13 18:20:59.335832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:26:25.041 [2024-12-13 18:20:59.335850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.995 ms
00:26:25.041 [2024-12-13 18:20:59.335858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:25.304 [2024-12-13 18:20:59.498875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:25.304 [2024-12-13 18:20:59.499044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:26:25.304 [2024-12-13 18:20:59.499065] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 162.973 ms 00:26:25.304 [2024-12-13 18:20:59.499073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.304 [2024-12-13 18:20:59.501671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.304 [2024-12-13 18:20:59.501725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:25.304 [2024-12-13 18:20:59.501735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:26:25.304 [2024-12-13 18:20:59.501742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.304 [2024-12-13 18:20:59.503866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.304 [2024-12-13 18:20:59.504022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:25.304 [2024-12-13 18:20:59.504084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:26:25.304 [2024-12-13 18:20:59.504169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.304 [2024-12-13 18:20:59.505818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.304 [2024-12-13 18:20:59.505982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:25.304 [2024-12-13 18:20:59.506041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:26:25.304 [2024-12-13 18:20:59.506065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.304 [2024-12-13 18:20:59.507721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.304 [2024-12-13 18:20:59.507872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:25.304 [2024-12-13 18:20:59.507889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:26:25.304 [2024-12-13 18:20:59.507896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.304 [2024-12-13 18:20:59.507929] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:25.304 [2024-12-13 18:20:59.507958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104960 / 261120 wr_cnt: 1 state: open 00:26:25.304 [2024-12-13 18:20:59.507968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.507976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.507985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.507993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508039] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:25.304 [2024-12-13 18:20:59.508102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 
[2024-12-13 18:20:59.508235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:26:25.305 [2024-12-13 18:20:59.508453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:25.305 [2024-12-13 18:20:59.508751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:25.306 [2024-12-13 18:20:59.508767] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:25.306 [2024-12-13 18:20:59.508780] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7551228f-b775-4f19-b32c-c5319322e129 00:26:25.306 [2024-12-13 18:20:59.508788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104960 00:26:25.306 [2024-12-13 18:20:59.508800] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105920 00:26:25.306 [2024-12-13 18:20:59.508807] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104960 00:26:25.306 [2024-12-13 18:20:59.508820] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:26:25.306 [2024-12-13 18:20:59.508827] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:25.306 [2024-12-13 18:20:59.508835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:25.306 [2024-12-13 18:20:59.508843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:25.306 [2024-12-13 18:20:59.508850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:25.306 [2024-12-13 18:20:59.508857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:25.306 [2024-12-13 18:20:59.508865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:25.306 [2024-12-13 18:20:59.508872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:25.306 [2024-12-13 18:20:59.508884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:26:25.306 [2024-12-13 18:20:59.508892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.511447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.306 [2024-12-13 18:20:59.511479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:25.306 [2024-12-13 18:20:59.511489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:26:25.306 [2024-12-13 18:20:59.511497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.511633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.306 [2024-12-13 18:20:59.511642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:25.306 [2024-12-13 18:20:59.511652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:26:25.306 [2024-12-13 18:20:59.511660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.519600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.519765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.306 [2024-12-13 18:20:59.519820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.519843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.519921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.519944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.306 [2024-12-13 18:20:59.519964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.519988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.520067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.520184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.306 [2024-12-13 18:20:59.520205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.520225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.520268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.520296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:25.306 [2024-12-13 18:20:59.520315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.520472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.535427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.535595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.306 [2024-12-13 18:20:59.535650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.535673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 
18:20:59.546348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.546518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.306 [2024-12-13 18:20:59.546573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.546596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.546665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.546687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.306 [2024-12-13 18:20:59.546708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.546728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.546775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.546797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.306 [2024-12-13 18:20:59.546871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.546894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.546989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.547014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.306 [2024-12-13 18:20:59.547034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.547096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.547148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.547172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:25.306 [2024-12-13 18:20:59.547193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.547218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.547284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.547329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.306 [2024-12-13 18:20:59.547349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.547376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.547435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.306 [2024-12-13 18:20:59.547485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.306 [2024-12-13 18:20:59.547510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.306 [2024-12-13 18:20:59.547535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.306 [2024-12-13 18:20:59.547682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 283.231 ms, result 0 00:26:25.878 00:26:25.878 00:26:25.878 18:21:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:28.426 18:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:28.426 [2024-12-13 18:21:02.560532] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:28.426 [2024-12-13 18:21:02.560870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93994 ] 00:26:28.426 [2024-12-13 18:21:02.708235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.426 [2024-12-13 18:21:02.736944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.689 [2024-12-13 18:21:02.847575] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:28.689 [2024-12-13 18:21:02.847916] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:28.689 [2024-12-13 18:21:03.010063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.010130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:28.689 [2024-12-13 18:21:03.010147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:28.689 [2024-12-13 18:21:03.010156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.010226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.010238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:28.689 [2024-12-13 18:21:03.010274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:28.689 [2024-12-13 18:21:03.010293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.010323] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:28.689 [2024-12-13 18:21:03.010618] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:28.689 [2024-12-13 18:21:03.010636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.010645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:28.689 [2024-12-13 18:21:03.010657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:26:28.689 [2024-12-13 18:21:03.010671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.012490] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:28.689 [2024-12-13 18:21:03.016122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.016175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:28.689 [2024-12-13 18:21:03.016188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.640 ms 00:26:28.689 [2024-12-13 18:21:03.016206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.016303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.016314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:28.689 [2024-12-13 
18:21:03.016329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:28.689 [2024-12-13 18:21:03.016338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.024514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.024558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:28.689 [2024-12-13 18:21:03.024574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.128 ms 00:26:28.689 [2024-12-13 18:21:03.024581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.024697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.024707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:28.689 [2024-12-13 18:21:03.024717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:28.689 [2024-12-13 18:21:03.024724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.024781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.024792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:28.689 [2024-12-13 18:21:03.024800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:28.689 [2024-12-13 18:21:03.024812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.024833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:28.689 [2024-12-13 18:21:03.026898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.026937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:28.689 [2024-12-13 18:21:03.026948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:26:28.689 [2024-12-13 18:21:03.026956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.026995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.027008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:28.689 [2024-12-13 18:21:03.027016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:28.689 [2024-12-13 18:21:03.027027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.027049] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:28.689 [2024-12-13 18:21:03.027071] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:28.689 [2024-12-13 18:21:03.027124] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:28.689 [2024-12-13 18:21:03.027139] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:28.689 [2024-12-13 18:21:03.027267] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:28.689 [2024-12-13 18:21:03.027279] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:28.689 [2024-12-13 18:21:03.027298] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:28.689 [2024-12-13 18:21:03.027313] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:28.689 [2024-12-13 18:21:03.027322] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:28.689 [2024-12-13 18:21:03.027330] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:28.689 [2024-12-13 18:21:03.027338] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:28.689 [2024-12-13 18:21:03.027345] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:28.689 [2024-12-13 18:21:03.027353] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:28.689 [2024-12-13 18:21:03.027361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.027368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:28.689 [2024-12-13 18:21:03.027376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:26:28.689 [2024-12-13 18:21:03.027390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.027476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.689 [2024-12-13 18:21:03.027484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:28.689 [2024-12-13 18:21:03.027496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:28.689 [2024-12-13 18:21:03.027503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.689 [2024-12-13 18:21:03.027601] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:28.689 [2024-12-13 18:21:03.027616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:28.689 [2024-12-13 18:21:03.027631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:28.689 [2024-12-13 18:21:03.027640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.689 [2024-12-13 18:21:03.027650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:28.689 [2024-12-13 18:21:03.027658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:28.689 [2024-12-13 18:21:03.027666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:28.689 [2024-12-13 18:21:03.027674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:28.689 [2024-12-13 18:21:03.027682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:28.689 [2024-12-13 18:21:03.027689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:28.689 [2024-12-13 18:21:03.027698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:28.689 [2024-12-13 18:21:03.027706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:28.689 [2024-12-13 18:21:03.027715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:28.689 [2024-12-13 18:21:03.027723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:28.689 [2024-12-13 18:21:03.027731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:28.689 [2024-12-13 18:21:03.027739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:26:28.689 [2024-12-13 18:21:03.027747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:28.689 [2024-12-13 18:21:03.027755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:28.689 [2024-12-13 18:21:03.027765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.689 [2024-12-13 18:21:03.027773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:28.689 [2024-12-13 18:21:03.027781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:28.689 [2024-12-13 18:21:03.027789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.689 [2024-12-13 18:21:03.027797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:28.689 [2024-12-13 18:21:03.027805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:28.689 [2024-12-13 18:21:03.027812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.689 [2024-12-13 18:21:03.027819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:28.689 [2024-12-13 18:21:03.027827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:28.689 [2024-12-13 18:21:03.027835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.690 [2024-12-13 18:21:03.027843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:28.690 [2024-12-13 18:21:03.027851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:28.690 [2024-12-13 18:21:03.027859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:28.690 [2024-12-13 18:21:03.027866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:28.690 [2024-12-13 18:21:03.027874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:28.690 [2024-12-13 18:21:03.027882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:28.690 [2024-12-13 18:21:03.027892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:28.690 [2024-12-13 18:21:03.027900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:28.690 [2024-12-13 18:21:03.027908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:28.690 [2024-12-13 18:21:03.027916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:28.690 [2024-12-13 18:21:03.027924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:28.690 [2024-12-13 18:21:03.027932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.690 [2024-12-13 18:21:03.027940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:28.690 [2024-12-13 18:21:03.027947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:28.690 [2024-12-13 18:21:03.027953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.690 [2024-12-13 18:21:03.027960] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:28.690 [2024-12-13 18:21:03.027971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:28.690 [2024-12-13 18:21:03.027979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:28.690 [2024-12-13 18:21:03.027990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:28.690 [2024-12-13 18:21:03.028003] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:28.690 [2024-12-13 18:21:03.028011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:28.690 [2024-12-13 18:21:03.028018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:28.690 [2024-12-13 18:21:03.028028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:28.690 [2024-12-13 18:21:03.028035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:28.690 [2024-12-13 18:21:03.028042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:28.690 [2024-12-13 18:21:03.028051] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:28.690 [2024-12-13 18:21:03.028060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:28.690 [2024-12-13 18:21:03.028069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:28.690 [2024-12-13 18:21:03.028076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:28.690 [2024-12-13 18:21:03.028084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:28.690 [2024-12-13 18:21:03.028091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:28.690 [2024-12-13 18:21:03.028099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:28.690 [2024-12-13 18:21:03.028106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:28.690 [2024-12-13 18:21:03.028114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:28.690 [2024-12-13 18:21:03.028121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:28.690 [2024-12-13 18:21:03.028129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:28.690 [2024-12-13 18:21:03.028142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:28.690 [2024-12-13 18:21:03.028150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:28.690 [2024-12-13 18:21:03.028161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:28.690 [2024-12-13 18:21:03.028168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:28.690 [2024-12-13 18:21:03.028176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:28.690 [2024-12-13 18:21:03.028183] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:28.690 
[2024-12-13 18:21:03.028192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:28.690 [2024-12-13 18:21:03.028199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:28.690 [2024-12-13 18:21:03.028206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:28.690 [2024-12-13 18:21:03.028213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:28.690 [2024-12-13 18:21:03.028220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:28.690 [2024-12-13 18:21:03.028227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.690 [2024-12-13 18:21:03.028236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:28.690 [2024-12-13 18:21:03.028259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:26:28.690 [2024-12-13 18:21:03.028270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.690 [2024-12-13 18:21:03.041442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.690 [2024-12-13 18:21:03.041489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.690 [2024-12-13 18:21:03.041508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.124 ms 00:26:28.690 [2024-12-13 18:21:03.041521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.690 [2024-12-13 18:21:03.041614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.690 [2024-12-13 18:21:03.041624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:28.690 [2024-12-13 18:21:03.041633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:28.690 [2024-12-13 18:21:03.041641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.690 [2024-12-13 18:21:03.062024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.690 [2024-12-13 18:21:03.062079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.690 [2024-12-13 18:21:03.062097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.319 ms 00:26:28.690 [2024-12-13 18:21:03.062109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.690 [2024-12-13 18:21:03.062164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.690 [2024-12-13 18:21:03.062179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.690 [2024-12-13 18:21:03.062192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:28.690 [2024-12-13 18:21:03.062203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 18:21:03.062653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.062695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.952 [2024-12-13 18:21:03.062710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:26:28.952 [2024-12-13 18:21:03.062723] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 18:21:03.062910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.062929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.952 [2024-12-13 18:21:03.062943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:26:28.952 [2024-12-13 18:21:03.062961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 18:21:03.069237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.069292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.952 [2024-12-13 18:21:03.069307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.248 ms 00:26:28.952 [2024-12-13 18:21:03.069318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 18:21:03.072487] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:28.952 [2024-12-13 18:21:03.072661] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:28.952 [2024-12-13 18:21:03.072690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.072702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:28.952 [2024-12-13 18:21:03.072714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:26:28.952 [2024-12-13 18:21:03.072724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 18:21:03.087813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.087845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:28.952 [2024-12-13 18:21:03.087863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.997 ms 00:26:28.952 [2024-12-13 18:21:03.087871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 18:21:03.090160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.090199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:28.952 [2024-12-13 18:21:03.090207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.248 ms 00:26:28.952 [2024-12-13 18:21:03.090214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 18:21:03.092048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.092078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:28.952 [2024-12-13 18:21:03.092087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:26:28.952 [2024-12-13 18:21:03.092094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 18:21:03.092426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.092438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:28.952 [2024-12-13 18:21:03.092447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:26:28.952 [2024-12-13 18:21:03.092454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 
18:21:03.109264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.109416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:28.952 [2024-12-13 18:21:03.109433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.783 ms 00:26:28.952 [2024-12-13 18:21:03.109448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.952 [2024-12-13 18:21:03.116863] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:28.952 [2024-12-13 18:21:03.119402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.952 [2024-12-13 18:21:03.119432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:28.953 [2024-12-13 18:21:03.119444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.920 ms 00:26:28.953 [2024-12-13 18:21:03.119459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.953 [2024-12-13 18:21:03.119511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.953 [2024-12-13 18:21:03.119522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:28.953 [2024-12-13 18:21:03.119536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:28.953 [2024-12-13 18:21:03.119550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.953 [2024-12-13 18:21:03.120994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.953 [2024-12-13 18:21:03.121027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:28.953 [2024-12-13 18:21:03.121036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:26:28.953 [2024-12-13 18:21:03.121043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.953 [2024-12-13 18:21:03.121065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.953 [2024-12-13 18:21:03.121073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:28.953 [2024-12-13 18:21:03.121081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:28.953 [2024-12-13 18:21:03.121088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.953 [2024-12-13 18:21:03.121122] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:28.953 [2024-12-13 18:21:03.121131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.953 [2024-12-13 18:21:03.121138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:28.953 [2024-12-13 18:21:03.121149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:28.953 [2024-12-13 18:21:03.121158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.953 [2024-12-13 18:21:03.124716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.953 [2024-12-13 18:21:03.124749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:28.953 [2024-12-13 18:21:03.124758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.540 ms 00:26:28.953 [2024-12-13 18:21:03.124765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.953 [2024-12-13 18:21:03.124836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.953 [2024-12-13 18:21:03.124845] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:28.953 [2024-12-13 18:21:03.124853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:28.953 [2024-12-13 18:21:03.124863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.953 [2024-12-13 18:21:03.125797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.337 ms, result 0 00:26:30.339  [2024-12-13T18:21:05.331Z] Copying: 1008/1048576 [kB] (1008 kBps) [2024-12-13T18:21:06.732Z] Copying: 4340/1048576 [kB] (3332 kBps) [2024-12-13T18:21:07.674Z] Copying: 25/1024 [MB] (21 MBps) [2024-12-13T18:21:08.613Z] Copying: 54/1024 [MB] (28 MBps) [2024-12-13T18:21:09.553Z] Copying: 83/1024 [MB] (29 MBps) [2024-12-13T18:21:10.495Z] Copying: 111/1024 [MB] (27 MBps) [2024-12-13T18:21:11.438Z] Copying: 138/1024 [MB] (26 MBps) [2024-12-13T18:21:12.377Z] Copying: 163/1024 [MB] (25 MBps) [2024-12-13T18:21:13.317Z] Copying: 188/1024 [MB] (24 MBps) [2024-12-13T18:21:14.705Z] Copying: 211/1024 [MB] (23 MBps) [2024-12-13T18:21:15.643Z] Copying: 234/1024 [MB] (23 MBps) [2024-12-13T18:21:16.577Z] Copying: 250/1024 [MB] (15 MBps) [2024-12-13T18:21:17.521Z] Copying: 270/1024 [MB] (20 MBps) [2024-12-13T18:21:18.457Z] Copying: 287/1024 [MB] (17 MBps) [2024-12-13T18:21:19.392Z] Copying: 305/1024 [MB] (17 MBps) [2024-12-13T18:21:20.334Z] Copying: 326/1024 [MB] (20 MBps) [2024-12-13T18:21:21.719Z] Copying: 342/1024 [MB] (16 MBps) [2024-12-13T18:21:22.659Z] Copying: 358/1024 [MB] (16 MBps) [2024-12-13T18:21:23.595Z] Copying: 382/1024 [MB] (23 MBps) [2024-12-13T18:21:24.536Z] Copying: 397/1024 [MB] (15 MBps) [2024-12-13T18:21:25.470Z] Copying: 413/1024 [MB] (16 MBps) [2024-12-13T18:21:26.407Z] Copying: 432/1024 [MB] (18 MBps) [2024-12-13T18:21:27.350Z] Copying: 451/1024 [MB] (18 MBps) [2024-12-13T18:21:28.756Z] Copying: 466/1024 [MB] (15 MBps) [2024-12-13T18:21:29.350Z] Copying: 484/1024 [MB] (17 MBps) [2024-12-13T18:21:30.728Z] Copying: 502/1024 [MB] (17 MBps) [2024-12-13T18:21:31.664Z] Copying: 529/1024 [MB] (27 MBps) [2024-12-13T18:21:32.605Z] Copying: 550/1024 [MB] (20 MBps) [2024-12-13T18:21:33.545Z] Copying: 568/1024 [MB] (18 MBps) [2024-12-13T18:21:34.491Z] Copying: 586/1024 [MB] (17 MBps) [2024-12-13T18:21:35.430Z] Copying: 606/1024 [MB] (20 MBps) [2024-12-13T18:21:36.371Z] Copying: 636/1024 [MB] (29 MBps) [2024-12-13T18:21:37.757Z] Copying: 663/1024 [MB] (27 MBps) [2024-12-13T18:21:38.328Z] Copying: 682/1024 [MB] (18 MBps) [2024-12-13T18:21:39.712Z] Copying: 704/1024 [MB] (21 MBps) [2024-12-13T18:21:40.658Z] Copying: 720/1024 [MB] (16 MBps) [2024-12-13T18:21:41.600Z] Copying: 735/1024 [MB] (15 MBps) [2024-12-13T18:21:42.544Z] Copying: 751/1024 [MB] (15 MBps) [2024-12-13T18:21:43.488Z] Copying: 766/1024 [MB] (15 MBps) [2024-12-13T18:21:44.432Z] Copying: 782/1024 [MB] (15 MBps) [2024-12-13T18:21:45.375Z] Copying: 798/1024 [MB] (15 MBps) [2024-12-13T18:21:46.318Z] Copying: 825/1024 [MB] (26 MBps) [2024-12-13T18:21:47.707Z] Copying: 842/1024 [MB] (17 MBps) [2024-12-13T18:21:48.652Z] Copying: 860/1024 [MB] (18 MBps) [2024-12-13T18:21:49.598Z] Copying: 892/1024 [MB] (31 MBps) [2024-12-13T18:21:50.544Z] Copying: 907/1024 [MB] (15 MBps) [2024-12-13T18:21:51.510Z] Copying: 923/1024 [MB] (15 MBps) [2024-12-13T18:21:52.529Z] Copying: 939/1024 [MB] (15 MBps) [2024-12-13T18:21:53.474Z] Copying: 964/1024 [MB] (25 MBps) [2024-12-13T18:21:54.418Z] Copying: 981/1024 [MB] (16 MBps) [2024-12-13T18:21:54.680Z] Copying: 1013/1024 
[MB] (32 MBps) [2024-12-13T18:21:54.943Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-12-13 18:21:54.811388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.811450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:20.566 [2024-12-13 18:21:54.811465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:20.566 [2024-12-13 18:21:54.811479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.811501] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:20.566 [2024-12-13 18:21:54.812045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.812064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:20.566 [2024-12-13 18:21:54.812074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:27:20.566 [2024-12-13 18:21:54.812081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.812595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.812637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:20.566 [2024-12-13 18:21:54.812749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:27:20.566 [2024-12-13 18:21:54.812773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.822727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.822834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:20.566 [2024-12-13 18:21:54.822888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.925 ms 00:27:20.566 [2024-12-13 18:21:54.822906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.828029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.828118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:20.566 [2024-12-13 18:21:54.828162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.089 ms 00:27:20.566 [2024-12-13 18:21:54.828180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.829359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.829445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:20.566 [2024-12-13 18:21:54.829488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:27:20.566 [2024-12-13 18:21:54.829505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.832902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.833002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:20.566 [2024-12-13 18:21:54.833014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.364 ms 00:27:20.566 [2024-12-13 18:21:54.833020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.834647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.834664] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:20.566 [2024-12-13 18:21:54.834671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:27:20.566 [2024-12-13 18:21:54.834677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.836547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.836627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:20.566 [2024-12-13 18:21:54.836666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:27:20.566 [2024-12-13 18:21:54.836684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.837966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.838046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:20.566 [2024-12-13 18:21:54.838083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:27:20.566 [2024-12-13 18:21:54.838100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.839978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.840094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:20.566 [2024-12-13 18:21:54.840137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:27:20.566 [2024-12-13 18:21:54.840155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.566 [2024-12-13 18:21:54.842702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.566 [2024-12-13 18:21:54.842999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:20.566 [2024-12-13 18:21:54.843216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.483 ms 00:27:20.567 [2024-12-13 18:21:54.843330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.567 [2024-12-13 18:21:54.843441] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:20.567 [2024-12-13 18:21:54.843603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:20.567 [2024-12-13 18:21:54.843796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:20.567 [2024-12-13 18:21:54.843888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.844029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.844228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.844343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.844425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.844506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.844625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.844706] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.844805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.844886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.845053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.845275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.845368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.845449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.845655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.845791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.845933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.846988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 
18:21:54.847089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 
00:27:20.567 [2024-12-13 18:21:54.847649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.847992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.848012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.848032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.848052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.848072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.848092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.848114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.848134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 
wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.848154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:20.567 [2024-12-13 18:21:54.848176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:20.568 [2024-12-13 18:21:54.848522] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:20.568 [2024-12-13 18:21:54.848560] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7551228f-b775-4f19-b32c-c5319322e129 00:27:20.568 [2024-12-13 18:21:54.848588] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:20.568 [2024-12-13 18:21:54.848606] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 159680 00:27:20.568 [2024-12-13 18:21:54.848623] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 157696 00:27:20.568 [2024-12-13 18:21:54.848645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126 00:27:20.568 [2024-12-13 18:21:54.848663] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:20.568 [2024-12-13 18:21:54.848683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:20.568 [2024-12-13 18:21:54.848702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:20.568 [2024-12-13 18:21:54.848719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:20.568 [2024-12-13 18:21:54.848736] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:20.568 [2024-12-13 18:21:54.848756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.568 [2024-12-13 18:21:54.848777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:20.568 [2024-12-13 18:21:54.848797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.317 ms 00:27:20.568 [2024-12-13 18:21:54.848818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.851631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.568 [2024-12-13 18:21:54.851836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:20.568 [2024-12-13 18:21:54.852007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:27:20.568 [2024-12-13 18:21:54.852088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.852283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.568 [2024-12-13 18:21:54.852349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:20.568 [2024-12-13 18:21:54.852510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:27:20.568 [2024-12-13 18:21:54.852531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.857422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.857521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:20.568 [2024-12-13 18:21:54.857566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.857588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.857648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.857669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:20.568 [2024-12-13 18:21:54.857694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.857713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.857760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.857847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:20.568 [2024-12-13 18:21:54.857867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.857885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.857911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.857930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:20.568 [2024-12-13 18:21:54.857949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.858001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.866937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.867059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:20.568 [2024-12-13 18:21:54.867105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.867127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.874324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.874443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:20.568 [2024-12-13 18:21:54.874496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.874520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.874576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.874599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:20.568 [2024-12-13 18:21:54.874618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.874636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.874675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.874696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:20.568 [2024-12-13 18:21:54.874716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.874760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.874845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.874868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:20.568 [2024-12-13 18:21:54.874888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.874907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.874949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.875022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:20.568 [2024-12-13 18:21:54.875041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.875059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.875107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.875132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:20.568 [2024-12-13 18:21:54.875182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.875203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.875298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.568 [2024-12-13 18:21:54.875329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:20.568 [2024-12-13 18:21:54.875388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.568 [2024-12-13 18:21:54.875409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.568 [2024-12-13 18:21:54.875555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.136 ms, result 0 00:27:21.140 00:27:21.140 00:27:21.140 18:21:55 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:23.055 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:23.055 18:21:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:23.055 [2024-12-13 18:21:57.246348] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:23.055 [2024-12-13 18:21:57.246441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94553 ] 00:27:23.055 [2024-12-13 18:21:57.388436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.055 [2024-12-13 18:21:57.409608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.317 [2024-12-13 18:21:57.505878] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:23.317 [2024-12-13 18:21:57.505950] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:23.317 [2024-12-13 18:21:57.667462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.317 [2024-12-13 18:21:57.667747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:23.317 [2024-12-13 18:21:57.667772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:23.317 [2024-12-13 18:21:57.667782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.317 [2024-12-13 18:21:57.667854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.667874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:23.318 [2024-12-13 18:21:57.667888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:23.318 [2024-12-13 18:21:57.667902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.667934] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:23.318 [2024-12-13 18:21:57.668190] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:23.318 [2024-12-13 18:21:57.668206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.668215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:23.318 [2024-12-13 18:21:57.668228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:27:23.318 [2024-12-13 18:21:57.668238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.670043] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:23.318 [2024-12-13 18:21:57.674124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.674179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:23.318 [2024-12-13 18:21:57.674191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.083 ms 00:27:23.318 [2024-12-13 18:21:57.674207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:23.318 [2024-12-13 18:21:57.674319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.674331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:23.318 [2024-12-13 18:21:57.674342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:23.318 [2024-12-13 18:21:57.674350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.682746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.682793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:23.318 [2024-12-13 18:21:57.682812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.339 ms 00:27:23.318 [2024-12-13 18:21:57.682821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.682933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.682943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:23.318 [2024-12-13 18:21:57.682954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:23.318 [2024-12-13 18:21:57.682962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.683023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.683035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:23.318 [2024-12-13 18:21:57.683044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:23.318 [2024-12-13 18:21:57.683056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.683078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:23.318 [2024-12-13 18:21:57.685066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.685111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:23.318 [2024-12-13 18:21:57.685125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.992 ms 00:27:23.318 [2024-12-13 18:21:57.685133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.685172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.685181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:23.318 [2024-12-13 18:21:57.685190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:23.318 [2024-12-13 18:21:57.685203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.685225] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:23.318 [2024-12-13 18:21:57.685270] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:23.318 [2024-12-13 18:21:57.685313] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:23.318 [2024-12-13 18:21:57.685330] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:23.318 [2024-12-13 18:21:57.685436] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:23.318 [2024-12-13 18:21:57.685449] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:23.318 [2024-12-13 18:21:57.685466] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:23.318 [2024-12-13 18:21:57.685478] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:23.318 [2024-12-13 18:21:57.685487] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:23.318 [2024-12-13 18:21:57.685495] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:23.318 [2024-12-13 18:21:57.685503] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:23.318 [2024-12-13 18:21:57.685510] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:23.318 [2024-12-13 18:21:57.685523] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:23.318 [2024-12-13 18:21:57.685531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.685542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:23.318 [2024-12-13 18:21:57.685553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:27:23.318 [2024-12-13 18:21:57.685567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.685652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.318 [2024-12-13 18:21:57.685662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:23.318 [2024-12-13 18:21:57.685670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:23.318 [2024-12-13 18:21:57.685677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.318 [2024-12-13 18:21:57.685773] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:23.318 [2024-12-13 18:21:57.685786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:23.318 [2024-12-13 18:21:57.685797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:23.318 [2024-12-13 18:21:57.685806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.318 [2024-12-13 18:21:57.685815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:23.318 [2024-12-13 18:21:57.685823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:23.318 [2024-12-13 18:21:57.685832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:23.318 [2024-12-13 18:21:57.685842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:23.318 [2024-12-13 18:21:57.685852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:23.318 [2024-12-13 18:21:57.685862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:23.318 [2024-12-13 18:21:57.685871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:23.318 [2024-12-13 18:21:57.685879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:23.318 [2024-12-13 18:21:57.685888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:23.318 [2024-12-13 18:21:57.685897] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md 00:27:23.318 [2024-12-13 18:21:57.685905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:23.318 [2024-12-13 18:21:57.685914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.318 [2024-12-13 18:21:57.685926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:23.318 [2024-12-13 18:21:57.685936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:23.318 [2024-12-13 18:21:57.685943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.318 [2024-12-13 18:21:57.685951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:23.318 [2024-12-13 18:21:57.685959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:23.318 [2024-12-13 18:21:57.685967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.318 [2024-12-13 18:21:57.685975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:23.318 [2024-12-13 18:21:57.685983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:23.318 [2024-12-13 18:21:57.685990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.318 [2024-12-13 18:21:57.686002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:23.318 [2024-12-13 18:21:57.686010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:23.318 [2024-12-13 18:21:57.686017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.318 [2024-12-13 18:21:57.686026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:23.318 [2024-12-13 18:21:57.686034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:23.318 [2024-12-13 18:21:57.686041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.318 [2024-12-13 18:21:57.686049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:23.318 [2024-12-13 18:21:57.686057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:23.318 [2024-12-13 18:21:57.686064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:23.318 [2024-12-13 18:21:57.686071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:23.318 [2024-12-13 18:21:57.686080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:23.318 [2024-12-13 18:21:57.686087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:23.318 [2024-12-13 18:21:57.686093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:23.318 [2024-12-13 18:21:57.686100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:23.318 [2024-12-13 18:21:57.686106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.318 [2024-12-13 18:21:57.686113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:23.318 [2024-12-13 18:21:57.686121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:23.318 [2024-12-13 18:21:57.686130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.318 [2024-12-13 18:21:57.686136] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:23.318 [2024-12-13 18:21:57.686146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:23.318 [2024-12-13 
18:21:57.686153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:23.318 [2024-12-13 18:21:57.686164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.319 [2024-12-13 18:21:57.686172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:23.319 [2024-12-13 18:21:57.686181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:23.319 [2024-12-13 18:21:57.686188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:23.319 [2024-12-13 18:21:57.686195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:23.319 [2024-12-13 18:21:57.686201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:23.319 [2024-12-13 18:21:57.686208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:23.319 [2024-12-13 18:21:57.686218] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:23.319 [2024-12-13 18:21:57.686228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:23.319 [2024-12-13 18:21:57.686236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:23.319 [2024-12-13 18:21:57.686534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:23.319 [2024-12-13 18:21:57.686596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:23.319 [2024-12-13 18:21:57.686627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:23.319 [2024-12-13 18:21:57.686655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:23.319 [2024-12-13 18:21:57.686684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:23.319 [2024-12-13 18:21:57.686715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:23.319 [2024-12-13 18:21:57.686744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:23.319 [2024-12-13 18:21:57.686771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:23.319 [2024-12-13 18:21:57.686808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:23.319 [2024-12-13 18:21:57.686837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:23.319 [2024-12-13 18:21:57.687158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:23.319 [2024-12-13 18:21:57.687196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:23.319 [2024-12-13 18:21:57.687225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:23.319 [2024-12-13 18:21:57.687274] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:23.319 [2024-12-13 18:21:57.687306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:23.319 [2024-12-13 18:21:57.687346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:23.319 [2024-12-13 18:21:57.687376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:23.319 [2024-12-13 18:21:57.687407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:23.319 [2024-12-13 18:21:57.687440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:23.319 [2024-12-13 18:21:57.687472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.319 [2024-12-13 18:21:57.687492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:23.319 [2024-12-13 18:21:57.687513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.767 ms 00:27:23.319 [2024-12-13 18:21:57.687596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.701496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.701652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:23.581 [2024-12-13 18:21:57.701715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.804 ms 00:27:23.581 [2024-12-13 18:21:57.701739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.701845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.701858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:23.581 [2024-12-13 18:21:57.701867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:23.581 [2024-12-13 18:21:57.701875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.736067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.736158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:23.581 [2024-12-13 18:21:57.736180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.129 ms 00:27:23.581 [2024-12-13 18:21:57.736194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.736310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.736340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:23.581 [2024-12-13 18:21:57.736367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:23.581 [2024-12-13 18:21:57.736387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.737109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.737169] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:23.581 [2024-12-13 18:21:57.737187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:27:23.581 [2024-12-13 18:21:57.737202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.737470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.737495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:23.581 [2024-12-13 18:21:57.737510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:27:23.581 [2024-12-13 18:21:57.737523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.746884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.747086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:23.581 [2024-12-13 18:21:57.747106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.318 ms 00:27:23.581 [2024-12-13 18:21:57.747114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.751428] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:23.581 [2024-12-13 18:21:57.751486] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:23.581 [2024-12-13 18:21:57.751503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.751512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:23.581 [2024-12-13 18:21:57.751521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.223 ms 00:27:23.581 [2024-12-13 18:21:57.751530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.768065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.768117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:23.581 [2024-12-13 18:21:57.768131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.476 ms 00:27:23.581 [2024-12-13 18:21:57.768149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.771455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.771504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:23.581 [2024-12-13 18:21:57.771513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:27:23.581 [2024-12-13 18:21:57.771521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.774488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.774538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:23.581 [2024-12-13 18:21:57.774549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:27:23.581 [2024-12-13 18:21:57.774568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.774909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.774930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 
00:27:23.581 [2024-12-13 18:21:57.774940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:27:23.581 [2024-12-13 18:21:57.774955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.800682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.800737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:23.581 [2024-12-13 18:21:57.800750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.704 ms 00:27:23.581 [2024-12-13 18:21:57.800759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.808954] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:23.581 [2024-12-13 18:21:57.812122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.812174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:23.581 [2024-12-13 18:21:57.812187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.313 ms 00:27:23.581 [2024-12-13 18:21:57.812195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.812283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.812300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:23.581 [2024-12-13 18:21:57.812317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:23.581 [2024-12-13 18:21:57.812325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.813145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.581 [2024-12-13 18:21:57.813185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:23.581 [2024-12-13 18:21:57.813198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:27:23.581 [2024-12-13 18:21:57.813206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.581 [2024-12-13 18:21:57.813239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.582 [2024-12-13 18:21:57.813266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:23.582 [2024-12-13 18:21:57.813275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:23.582 [2024-12-13 18:21:57.813283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.582 [2024-12-13 18:21:57.813320] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:23.582 [2024-12-13 18:21:57.813332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.582 [2024-12-13 18:21:57.813341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:23.582 [2024-12-13 18:21:57.813354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:23.582 [2024-12-13 18:21:57.813362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.582 [2024-12-13 18:21:57.818387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.582 [2024-12-13 18:21:57.818433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:23.582 [2024-12-13 18:21:57.818445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.007 ms 00:27:23.582 
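Each management step in the startup sequence above is logged by mngt/ftl_mngt.c as a fixed four-entry group (Action, name, duration, status). As a purely illustrative aside, a minimal sketch for tabulating the slowest steps from a capture like this one, assuming one NOTICE entry per line and using `autotest.log` as a placeholder path:

```bash
#!/usr/bin/env bash
# Illustrative sketch only -- not part of the captured run. Pairs each
# "name: <step>" entry with the "duration: <N> ms" entry that follows it
# and prints the ten slowest FTL management steps.
awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
     /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                        printf "%10.3f ms  %s\n", $0, name }' autotest.log |
  sort -rn | head -n 10
```

On this run such a tabulation would surface 'Initialize NV cache' (34.129 ms) and 'Restore P2L checkpoints' (25.704 ms) as the dominant contributors to the 151.757 ms 'FTL startup' total reported below.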
[2024-12-13 18:21:57.818454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.582
[2024-12-13 18:21:57.818533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.582
[2024-12-13 18:21:57.818543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:23.582
[2024-12-13 18:21:57.818552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:23.582
[2024-12-13 18:21:57.818572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.582
[2024-12-13 18:21:57.819683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.757 ms, result 0 00:27:24.968
[2024-12-13T18:22:00.290Z] Copying: 22/1024 [MB] (22 MBps)
[... intermediate 'Copying' progress updates trimmed; throughput stayed between 10 and 24 MBps until the copy completed at 18:22:56Z ...]
[2024-12-13T18:22:56.018Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-12-13 18:22:55.949667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641
[2024-12-13 18:22:55.949750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:21.641
[2024-12-13 18:22:55.949767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:21.641
[2024-12-13 18:22:55.949785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641
[2024-12-13 18:22:55.949811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:21.641
[2024-12-13 18:22:55.950623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641
[2024-12-13 18:22:55.950653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:21.641
[2024-12-13 18:22:55.950665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:28:21.641
[2024-12-13 18:22:55.950675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641
[2024-12-13 18:22:55.950926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641
[2024-12-13 18:22:55.950945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:21.641
[2024-12-13 18:22:55.950956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:28:21.641
[2024-12-13 18:22:55.950970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641
[2024-12-13 18:22:55.955286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641
[2024-12-13 18:22:55.955311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:21.641
[2024-12-13 18:22:55.955321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:28:21.641
[2024-12-13 18:22:55.955329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641
[2024-12-13 18:22:55.962150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641
[2024-12-13 18:22:55.962192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:21.641
[2024-12-13 18:22:55.962202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.800 ms 00:28:21.641
[2024-12-13 18:22:55.962210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641
[2024-12-13 18:22:55.965342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641
[2024-12-13 18:22:55.965392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:21.641
[2024-12-13 18:22:55.965403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:28:21.641
[2024-12-13 18:22:55.965411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641 [2024-12-13 18:22:55.970713] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641 [2024-12-13 18:22:55.971189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:21.641 [2024-12-13 18:22:55.971208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.255 ms 00:28:21.641 [2024-12-13 18:22:55.971218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641 [2024-12-13 18:22:55.973910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641 [2024-12-13 18:22:55.973973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:21.641 [2024-12-13 18:22:55.973988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.305 ms 00:28:21.641 [2024-12-13 18:22:55.974007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641 [2024-12-13 18:22:55.977032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641 [2024-12-13 18:22:55.977238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:21.641 [2024-12-13 18:22:55.977271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:28:21.641 [2024-12-13 18:22:55.977280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641 [2024-12-13 18:22:55.979421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641 [2024-12-13 18:22:55.979466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:21.641 [2024-12-13 18:22:55.979477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:28:21.641 [2024-12-13 18:22:55.979485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641 [2024-12-13 18:22:55.981304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641 [2024-12-13 18:22:55.981343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:21.641 [2024-12-13 18:22:55.981353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:28:21.641 [2024-12-13 18:22:55.981361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641 [2024-12-13 18:22:55.982926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.641 [2024-12-13 18:22:55.982973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:21.641 [2024-12-13 18:22:55.982983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:28:21.641 [2024-12-13 18:22:55.982990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.641 [2024-12-13 18:22:55.983029] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:21.641 [2024-12-13 18:22:55.983044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:21.641 [2024-12-13 18:22:55.983055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:21.641 [2024-12-13 18:22:55.983065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:21.641 [2024-12-13 18:22:55.983073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:21.641 [2024-12-13 18:22:55.983081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:21.641 [2024-12-13 18:22:55.983090] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:21.641 [2024-12-13 18:22:55.983098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 
18:22:55.983509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:28:21.642 [2024-12-13 18:22:55.983722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.983997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.984004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.984012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:21.642 [2024-12-13 18:22:55.984019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:21.643 [2024-12-13 18:22:55.984026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:21.643 [2024-12-13 18:22:55.984033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:21.643 [2024-12-13 18:22:55.984041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:21.643 [2024-12-13 18:22:55.984050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:21.643 [2024-12-13 18:22:55.984057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:21.643 [2024-12-13 18:22:55.984065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:21.643 [2024-12-13 18:22:55.984083] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:21.643 [2024-12-13 18:22:55.984102] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7551228f-b775-4f19-b32c-c5319322e129 00:28:21.643 [2024-12-13 18:22:55.984111] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:21.643 [2024-12-13 18:22:55.984119] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:21.643 [2024-12-13 18:22:55.984127] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:21.643 [2024-12-13 18:22:55.984136] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:21.643 [2024-12-13 18:22:55.984144] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:21.643 [2024-12-13 18:22:55.984152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:21.643 [2024-12-13 18:22:55.984164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:21.643 [2024-12-13 18:22:55.984171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:21.643 [2024-12-13 18:22:55.984178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:21.643 [2024-12-13 18:22:55.984186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.643 [2024-12-13 18:22:55.984200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:21.643 [2024-12-13 18:22:55.984208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:28:21.643 [2024-12-13 18:22:55.984216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.643 [2024-12-13 18:22:55.986595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.643 [2024-12-13 18:22:55.986619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:21.643 [2024-12-13 18:22:55.986638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.359 ms 00:28:21.643 [2024-12-13 18:22:55.986646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.643 [2024-12-13 18:22:55.986776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.643 [2024-12-13 18:22:55.986792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:21.643 [2024-12-13 18:22:55.986801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:21.643 [2024-12-13 18:22:55.986812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.643 [2024-12-13 18:22:55.994128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.643 [2024-12-13 18:22:55.994177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.643 [2024-12-13 18:22:55.994193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.643 [2024-12-13 18:22:55.994204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.643 [2024-12-13 18:22:55.994301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.643 [2024-12-13 18:22:55.994311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:21.643 [2024-12-13 18:22:55.994319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.643 [2024-12-13 18:22:55.994327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.643 [2024-12-13 18:22:55.994403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.643 [2024-12-13 18:22:55.994414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:21.643 [2024-12-13 18:22:55.994422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.643 [2024-12-13 18:22:55.994435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.643 [2024-12-13 18:22:55.994455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.643 [2024-12-13 18:22:55.994463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:28:21.643 [2024-12-13 18:22:55.994470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.643 [2024-12-13 18:22:55.994478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.643 [2024-12-13 18:22:56.008093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.643 [2024-12-13 18:22:56.008309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:21.643 [2024-12-13 18:22:56.008329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.643 [2024-12-13 18:22:56.008347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.905 [2024-12-13 18:22:56.019369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.905 [2024-12-13 18:22:56.019562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:21.905 [2024-12-13 18:22:56.019580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.905 [2024-12-13 18:22:56.019589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.905 [2024-12-13 18:22:56.019658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.905 [2024-12-13 18:22:56.019668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:21.905 [2024-12-13 18:22:56.019678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.905 [2024-12-13 18:22:56.019692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.905 [2024-12-13 18:22:56.019730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.905 [2024-12-13 18:22:56.019744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:21.905 [2024-12-13 18:22:56.019752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.905 [2024-12-13 18:22:56.019760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.905 [2024-12-13 18:22:56.019837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.905 [2024-12-13 18:22:56.019847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:21.905 [2024-12-13 18:22:56.019856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.905 [2024-12-13 18:22:56.019864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.905 [2024-12-13 18:22:56.019901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.905 [2024-12-13 18:22:56.019915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:21.905 [2024-12-13 18:22:56.019925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.905 [2024-12-13 18:22:56.019933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.905 [2024-12-13 18:22:56.019976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.905 [2024-12-13 18:22:56.019986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:21.905 [2024-12-13 18:22:56.019999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.905 [2024-12-13 18:22:56.020011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.905 [2024-12-13 18:22:56.020061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.905 [2024-12-13 18:22:56.020075] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:21.905 [2024-12-13 18:22:56.020090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.905 [2024-12-13 18:22:56.020106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.905 [2024-12-13 18:22:56.020275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.542 ms, result 0 00:28:22.166 00:28:22.166 00:28:22.166 18:22:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:24.799 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:24.799 Process with pid 92642 is not found 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92642 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92642 ']' 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 92642 00:28:24.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92642) - No such process 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 92642 is not found' 00:28:24.799 18:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:24.799 Remove shared memory files 00:28:24.799 18:22:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:24.799 18:22:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:24.799 18:22:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:24.799 18:22:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:24.799 18:22:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:24.799 18:22:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:24.799 18:22:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:24.799 ************************************ 00:28:24.799 END TEST ftl_dirty_shutdown 00:28:24.799 ************************************ 00:28:24.799 00:28:24.799 real 4m1.206s 00:28:24.799 user 4m28.242s 00:28:24.799 sys 0m27.487s 00:28:24.799 18:22:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.799 18:22:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.061 18:22:59 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:25.061 18:22:59 ftl -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:25.061 18:22:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:25.061 18:22:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:25.061 ************************************ 00:28:25.061 START TEST ftl_upgrade_shutdown 00:28:25.061 ************************************ 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:25.061 * Looking for test storage... 00:28:25.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.061 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:25.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.062 --rc genhtml_branch_coverage=1 00:28:25.062 --rc genhtml_function_coverage=1 00:28:25.062 --rc genhtml_legend=1 00:28:25.062 --rc geninfo_all_blocks=1 00:28:25.062 --rc geninfo_unexecuted_blocks=1 00:28:25.062 00:28:25.062 ' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:25.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.062 --rc genhtml_branch_coverage=1 00:28:25.062 --rc genhtml_function_coverage=1 00:28:25.062 --rc genhtml_legend=1 00:28:25.062 --rc geninfo_all_blocks=1 00:28:25.062 --rc geninfo_unexecuted_blocks=1 00:28:25.062 00:28:25.062 ' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:25.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.062 --rc genhtml_branch_coverage=1 00:28:25.062 --rc genhtml_function_coverage=1 00:28:25.062 --rc genhtml_legend=1 00:28:25.062 --rc geninfo_all_blocks=1 00:28:25.062 --rc geninfo_unexecuted_blocks=1 00:28:25.062 00:28:25.062 ' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:25.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.062 --rc genhtml_branch_coverage=1 00:28:25.062 --rc genhtml_function_coverage=1 00:28:25.062 --rc genhtml_legend=1 00:28:25.062 --rc geninfo_all_blocks=1 00:28:25.062 --rc geninfo_unexecuted_blocks=1 00:28:25.062 00:28:25.062 ' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:25.062 18:22:59 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95258 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95258 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95258 ']' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:25.062 18:22:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.324 [2024-12-13 18:22:59.470901] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:25.324 [2024-12-13 18:22:59.471137] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95258 ] 00:28:25.324 [2024-12-13 18:22:59.625231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.324 [2024-12-13 18:22:59.667168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:25.897 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:25.898 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:26.470 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:26.470 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:26.470 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:26.470 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:28:26.470 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:26.470 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:26.470 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:28:26.470 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:26.470 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:26.470 { 00:28:26.470 "name": "basen1", 00:28:26.470 "aliases": [ 00:28:26.470 "f083d42c-2f28-4a69-8c15-627498ab2844" 00:28:26.470 ], 00:28:26.470 "product_name": "NVMe disk", 00:28:26.470 "block_size": 4096, 00:28:26.470 "num_blocks": 1310720, 00:28:26.470 "uuid": "f083d42c-2f28-4a69-8c15-627498ab2844", 00:28:26.470 "numa_id": -1, 00:28:26.470 "assigned_rate_limits": { 00:28:26.470 "rw_ios_per_sec": 0, 00:28:26.470 "rw_mbytes_per_sec": 0, 00:28:26.470 "r_mbytes_per_sec": 0, 00:28:26.470 "w_mbytes_per_sec": 0 00:28:26.470 }, 00:28:26.470 "claimed": true, 00:28:26.470 "claim_type": "read_many_write_one", 00:28:26.470 "zoned": false, 00:28:26.470 "supported_io_types": { 00:28:26.470 "read": true, 00:28:26.470 "write": true, 00:28:26.470 "unmap": true, 00:28:26.470 "flush": true, 00:28:26.470 "reset": true, 00:28:26.470 "nvme_admin": true, 00:28:26.470 "nvme_io": true, 00:28:26.470 "nvme_io_md": false, 00:28:26.470 "write_zeroes": true, 00:28:26.470 "zcopy": false, 00:28:26.470 "get_zone_info": false, 00:28:26.470 "zone_management": false, 00:28:26.470 "zone_append": false, 00:28:26.470 "compare": true, 00:28:26.470 "compare_and_write": false, 00:28:26.470 "abort": true, 00:28:26.470 "seek_hole": false, 00:28:26.470 "seek_data": false, 00:28:26.470 "copy": true, 00:28:26.470 "nvme_iov_md": false 00:28:26.470 }, 00:28:26.470 "driver_specific": { 00:28:26.470 "nvme": [ 00:28:26.470 { 00:28:26.470 "pci_address": "0000:00:11.0", 00:28:26.470 "trid": { 00:28:26.471 "trtype": "PCIe", 00:28:26.471 "traddr": "0000:00:11.0" 00:28:26.471 }, 00:28:26.471 "ctrlr_data": { 00:28:26.471 "cntlid": 0, 00:28:26.471 "vendor_id": "0x1b36", 00:28:26.471 "model_number": "QEMU NVMe Ctrl", 00:28:26.471 "serial_number": "12341", 00:28:26.471 "firmware_revision": "8.0.0", 00:28:26.471 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:26.471 "oacs": { 00:28:26.471 "security": 0, 00:28:26.471 "format": 1, 00:28:26.471 "firmware": 0, 00:28:26.471 "ns_manage": 1 00:28:26.471 }, 00:28:26.471 "multi_ctrlr": false, 00:28:26.471 "ana_reporting": false 00:28:26.471 }, 00:28:26.471 "vs": { 00:28:26.471 "nvme_version": "1.4" 00:28:26.471 }, 00:28:26.471 "ns_data": { 00:28:26.471 "id": 1, 00:28:26.471 "can_share": false 00:28:26.471 } 00:28:26.471 } 00:28:26.471 ], 00:28:26.471 "mp_policy": "active_passive" 00:28:26.471 } 00:28:26.471 } 00:28:26.471 ]' 00:28:26.471 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:26.471 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:26.471 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:26.731 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:26.731 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:26.731 18:23:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:26.731 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:26.731 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:26.731 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:26.731 18:23:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:26.731 18:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:26.731 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a7d8ea1c-8ab8-4bef-8e96-8ca0e2864b09 00:28:26.731 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:26.731 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a7d8ea1c-8ab8-4bef-8e96-8ca0e2864b09 00:28:26.993 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:27.255 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=4568ae64-c0da-47a5-81d0-f38616632a8c 00:28:27.255 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 4568ae64-c0da-47a5-81d0-f38616632a8c 00:28:27.516 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=4ec3d54b-f81b-4975-9234-ecba330bb33f 00:28:27.516 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 4ec3d54b-f81b-4975-9234-ecba330bb33f ]] 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 4ec3d54b-f81b-4975-9234-ecba330bb33f 5120 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=4ec3d54b-f81b-4975-9234-ecba330bb33f 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4ec3d54b-f81b-4975-9234-ecba330bb33f 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4ec3d54b-f81b-4975-9234-ecba330bb33f 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:27.517 18:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ec3d54b-f81b-4975-9234-ecba330bb33f 00:28:27.775 18:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:27.775 { 00:28:27.775 "name": "4ec3d54b-f81b-4975-9234-ecba330bb33f", 00:28:27.775 "aliases": [ 00:28:27.775 "lvs/basen1p0" 00:28:27.775 ], 00:28:27.775 "product_name": "Logical Volume", 00:28:27.775 "block_size": 4096, 00:28:27.775 "num_blocks": 5242880, 00:28:27.775 "uuid": "4ec3d54b-f81b-4975-9234-ecba330bb33f", 00:28:27.775 "assigned_rate_limits": { 00:28:27.775 "rw_ios_per_sec": 0, 00:28:27.775 "rw_mbytes_per_sec": 0, 00:28:27.775 "r_mbytes_per_sec": 0, 00:28:27.775 "w_mbytes_per_sec": 0 00:28:27.775 }, 00:28:27.775 "claimed": false, 00:28:27.775 "zoned": false, 00:28:27.775 "supported_io_types": { 00:28:27.775 "read": true, 00:28:27.775 "write": true, 00:28:27.775 "unmap": true, 00:28:27.775 "flush": false, 00:28:27.775 "reset": true, 00:28:27.775 "nvme_admin": false, 00:28:27.775 "nvme_io": false, 00:28:27.775 "nvme_io_md": false, 00:28:27.775 "write_zeroes": 
true, 00:28:27.775 "zcopy": false, 00:28:27.775 "get_zone_info": false, 00:28:27.775 "zone_management": false, 00:28:27.775 "zone_append": false, 00:28:27.775 "compare": false, 00:28:27.775 "compare_and_write": false, 00:28:27.775 "abort": false, 00:28:27.775 "seek_hole": true, 00:28:27.775 "seek_data": true, 00:28:27.775 "copy": false, 00:28:27.775 "nvme_iov_md": false 00:28:27.775 }, 00:28:27.775 "driver_specific": { 00:28:27.776 "lvol": { 00:28:27.776 "lvol_store_uuid": "4568ae64-c0da-47a5-81d0-f38616632a8c", 00:28:27.776 "base_bdev": "basen1", 00:28:27.776 "thin_provision": true, 00:28:27.776 "num_allocated_clusters": 0, 00:28:27.776 "snapshot": false, 00:28:27.776 "clone": false, 00:28:27.776 "esnap_clone": false 00:28:27.776 } 00:28:27.776 } 00:28:27.776 } 00:28:27.776 ]' 00:28:27.776 18:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:27.776 18:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:27.776 18:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:27.776 18:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:28:27.776 18:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:28:27.776 18:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:28:27.776 18:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:27.776 18:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:27.776 18:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:28.034 18:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:28.034 18:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:28.034 18:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:28.293 18:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:28.293 18:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:28.293 18:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4ec3d54b-f81b-4975-9234-ecba330bb33f -c cachen1p0 --l2p_dram_limit 2 00:28:28.555 [2024-12-13 18:23:02.684762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.555 [2024-12-13 18:23:02.684815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:28.555 [2024-12-13 18:23:02.684827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:28.555 [2024-12-13 18:23:02.684836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.555 [2024-12-13 18:23:02.684876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.555 [2024-12-13 18:23:02.684885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:28.555 [2024-12-13 18:23:02.684893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:28.555 [2024-12-13 18:23:02.684903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.555 [2024-12-13 18:23:02.684918] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:28.555 [2024-12-13 
18:23:02.685131] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:28.555 [2024-12-13 18:23:02.685144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.555 [2024-12-13 18:23:02.685153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:28.555 [2024-12-13 18:23:02.685160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:28:28.555 [2024-12-13 18:23:02.685168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.555 [2024-12-13 18:23:02.685271] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d3a22197-3b2b-4bc7-ae5a-99d140cd7a75 00:28:28.555 [2024-12-13 18:23:02.686616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.556 [2024-12-13 18:23:02.686640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:28.556 [2024-12-13 18:23:02.686654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:28.556 [2024-12-13 18:23:02.686661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.556 [2024-12-13 18:23:02.693630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.556 [2024-12-13 18:23:02.693716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:28.556 [2024-12-13 18:23:02.693728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.933 ms 00:28:28.556 [2024-12-13 18:23:02.693734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.556 [2024-12-13 18:23:02.693776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.556 [2024-12-13 18:23:02.693783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:28.556 [2024-12-13 18:23:02.693792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:28.556 [2024-12-13 18:23:02.693798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.556 [2024-12-13 18:23:02.693846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.556 [2024-12-13 18:23:02.693854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:28.556 [2024-12-13 18:23:02.693862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:28.556 [2024-12-13 18:23:02.693869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.556 [2024-12-13 18:23:02.693888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:28.556 [2024-12-13 18:23:02.695592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.556 [2024-12-13 18:23:02.695622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:28.556 [2024-12-13 18:23:02.695630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.710 ms 00:28:28.556 [2024-12-13 18:23:02.695638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.556 [2024-12-13 18:23:02.695661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.556 [2024-12-13 18:23:02.695670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:28.556 [2024-12-13 18:23:02.695677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:28.556 [2024-12-13 18:23:02.695686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:28.556 [2024-12-13 18:23:02.695700] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:28.556 [2024-12-13 18:23:02.695828] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:28.556 [2024-12-13 18:23:02.695842] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:28.556 [2024-12-13 18:23:02.695853] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:28.556 [2024-12-13 18:23:02.695862] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:28.556 [2024-12-13 18:23:02.695878] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:28.556 [2024-12-13 18:23:02.695885] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:28.556 [2024-12-13 18:23:02.695895] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:28.556 [2024-12-13 18:23:02.695904] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:28.556 [2024-12-13 18:23:02.695911] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:28.556 [2024-12-13 18:23:02.695918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.556 [2024-12-13 18:23:02.695926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:28.556 [2024-12-13 18:23:02.695933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:28:28.556 [2024-12-13 18:23:02.695941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.556 [2024-12-13 18:23:02.696010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.556 [2024-12-13 18:23:02.696022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:28.556 [2024-12-13 18:23:02.696029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:28:28.556 [2024-12-13 18:23:02.696046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.556 [2024-12-13 18:23:02.696126] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:28.556 [2024-12-13 18:23:02.696141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:28.556 [2024-12-13 18:23:02.696148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:28.556 [2024-12-13 18:23:02.696156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:28.556 [2024-12-13 18:23:02.696169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:28.556 [2024-12-13 18:23:02.696181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:28.556 [2024-12-13 18:23:02.696187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:28.556 [2024-12-13 18:23:02.696193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:28.556 [2024-12-13 18:23:02.696207] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:28.556 [2024-12-13 18:23:02.696213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:28.556 [2024-12-13 18:23:02.696227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:28.556 [2024-12-13 18:23:02.696234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:28.556 [2024-12-13 18:23:02.696260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:28.556 [2024-12-13 18:23:02.696267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:28.556 [2024-12-13 18:23:02.696281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:28.556 [2024-12-13 18:23:02.696290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.556 [2024-12-13 18:23:02.696296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:28.556 [2024-12-13 18:23:02.696306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:28.556 [2024-12-13 18:23:02.696312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.556 [2024-12-13 18:23:02.696320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:28.556 [2024-12-13 18:23:02.696327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:28.556 [2024-12-13 18:23:02.696336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.556 [2024-12-13 18:23:02.696343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:28.556 [2024-12-13 18:23:02.696352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:28.556 [2024-12-13 18:23:02.696359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.556 [2024-12-13 18:23:02.696367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:28.556 [2024-12-13 18:23:02.696373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:28.556 [2024-12-13 18:23:02.696381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:28.556 [2024-12-13 18:23:02.696395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:28.556 [2024-12-13 18:23:02.696401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:28.556 [2024-12-13 18:23:02.696417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:28.556 [2024-12-13 18:23:02.696438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:28.556 [2024-12-13 18:23:02.696450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696457] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:28.556 [2024-12-13 18:23:02.696465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:28.556 [2024-12-13 18:23:02.696478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:28.556 [2024-12-13 18:23:02.696485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.556 [2024-12-13 18:23:02.696494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:28.556 [2024-12-13 18:23:02.696505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:28.556 [2024-12-13 18:23:02.696512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:28.556 [2024-12-13 18:23:02.696518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:28.556 [2024-12-13 18:23:02.696525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:28.556 [2024-12-13 18:23:02.696531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:28.556 [2024-12-13 18:23:02.696542] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:28.556 [2024-12-13 18:23:02.696553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.556 [2024-12-13 18:23:02.696562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:28.556 [2024-12-13 18:23:02.696570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:28.556 [2024-12-13 18:23:02.696578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:28.556 [2024-12-13 18:23:02.696591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:28.556 [2024-12-13 18:23:02.696601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:28.557 [2024-12-13 18:23:02.696608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:28.557 [2024-12-13 18:23:02.696619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:28.557 [2024-12-13 18:23:02.696630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:28.557 [2024-12-13 18:23:02.696638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:28.557 [2024-12-13 18:23:02.696645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:28.557 [2024-12-13 18:23:02.696653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:28.557 [2024-12-13 18:23:02.696659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:28.557 [2024-12-13 18:23:02.696669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:28.557 [2024-12-13 18:23:02.696676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:28.557 [2024-12-13 18:23:02.696685] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:28.557 [2024-12-13 18:23:02.696692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.557 [2024-12-13 18:23:02.696702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:28.557 [2024-12-13 18:23:02.696707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:28.557 [2024-12-13 18:23:02.696714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:28.557 [2024-12-13 18:23:02.696720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:28.557 [2024-12-13 18:23:02.696728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.557 [2024-12-13 18:23:02.696734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:28.557 [2024-12-13 18:23:02.696743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.652 ms 00:28:28.557 [2024-12-13 18:23:02.696749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.557 [2024-12-13 18:23:02.696780] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
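For readers reconstructing the setup from the xtrace above: before the FTL startup trace begins, the test has assembled a two-device stack. A minimal sketch of that sequence, with the run-specific UUIDs replaced by shell variables (paths as in this repo; the trace shows the same calls with concrete values):

  # base device: NVMe at 0000:00:11.0 -> lvstore "lvs" -> 20 GiB thin-provisioned lvol
  scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs)
  lvol=$(scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u "$lvs")
  # cache device: NVMe at 0000:00:10.0, split so its first 5 GiB becomes cachen1p0
  scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create cachen1 -s 5120 1
  # FTL bdev on top: lvol as base, split partition as NV cache, 2 MiB L2P DRAM limit
  scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2

The scrub notice just above is completed by the chunk count on the next line; scrubbing those 5 NV cache chunks (3779.760 ms) accounts for most of the 3923.512 ms 'FTL startup' total reported further below.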
00:28:28.557 [2024-12-13 18:23:02.696788] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:32.763 [2024-12-13 18:23:06.476574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.476696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:32.763 [2024-12-13 18:23:06.476741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3779.760 ms 00:28:32.763 [2024-12-13 18:23:06.476766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.490524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.490567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:32.763 [2024-12-13 18:23:06.490582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.581 ms 00:28:32.763 [2024-12-13 18:23:06.490597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.490645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.490659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:32.763 [2024-12-13 18:23:06.490669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:32.763 [2024-12-13 18:23:06.490678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.501535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.501737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:32.763 [2024-12-13 18:23:06.501759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.807 ms 00:28:32.763 [2024-12-13 18:23:06.501772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.501803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.501816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:32.763 [2024-12-13 18:23:06.501827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:32.763 [2024-12-13 18:23:06.501835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.502296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.502320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:32.763 [2024-12-13 18:23:06.502333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.410 ms 00:28:32.763 [2024-12-13 18:23:06.502343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.502391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.502402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:32.763 [2024-12-13 18:23:06.502414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:32.763 [2024-12-13 18:23:06.502423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.509480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.509513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:32.763 [2024-12-13 18:23:06.509525] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.034 ms 00:28:32.763 [2024-12-13 18:23:06.509533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.538328] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:32.763 [2024-12-13 18:23:06.539435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.539473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:32.763 [2024-12-13 18:23:06.539488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.845 ms 00:28:32.763 [2024-12-13 18:23:06.539499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.554694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.554810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:32.763 [2024-12-13 18:23:06.554856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.150 ms 00:28:32.763 [2024-12-13 18:23:06.554890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.555110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.555149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:32.763 [2024-12-13 18:23:06.555174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.119 ms 00:28:32.763 [2024-12-13 18:23:06.555201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.559296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.559333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:32.763 [2024-12-13 18:23:06.559347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.007 ms 00:28:32.763 [2024-12-13 18:23:06.559362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.562058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.562238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:32.763 [2024-12-13 18:23:06.562267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.662 ms 00:28:32.763 [2024-12-13 18:23:06.562278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.562810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.562849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:32.763 [2024-12-13 18:23:06.562862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:28:32.763 [2024-12-13 18:23:06.562875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.592970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.593013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:32.763 [2024-12-13 18:23:06.593028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.071 ms 00:28:32.763 [2024-12-13 18:23:06.593039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.597352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
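One number worth pausing on before the trace resumes with the 'Clear trim map' step: the layout dump above lists 3,774,873 L2P entries at 4 bytes per address, i.e. about 14 MiB for the full logical-to-physical table, while bdev_ftl_create was passed --l2p_dram_limit 2. That is why ftl_l2p_cache reports a maximum resident size of 1 (of 2) MiB: only a small window of the L2P stays in DRAM and the rest is paged from media on demand, which presumably also explains why 'Initialize L2P' (29.845 ms) is one of the longer startup steps. A quick check of the arithmetic:

  # full L2P size in MiB = entries * address_size / 2^20
  echo $(( 3774873 * 4 / 1048576 ))   # -> 14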
00:28:32.763 [2024-12-13 18:23:06.599538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:32.763 [2024-12-13 18:23:06.599559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.270 ms 00:28:32.763 [2024-12-13 18:23:06.599571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.603055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.603189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:32.763 [2024-12-13 18:23:06.603204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.449 ms 00:28:32.763 [2024-12-13 18:23:06.603214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.607522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.607557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:32.763 [2024-12-13 18:23:06.607567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.261 ms 00:28:32.763 [2024-12-13 18:23:06.607578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.607622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.607637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:32.763 [2024-12-13 18:23:06.607647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:32.763 [2024-12-13 18:23:06.607656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.607719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.763 [2024-12-13 18:23:06.607734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:32.763 [2024-12-13 18:23:06.607743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:32.763 [2024-12-13 18:23:06.607755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.763 [2024-12-13 18:23:06.608720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3923.512 ms, result 0 00:28:32.763 { 00:28:32.763 "name": "ftl", 00:28:32.763 "uuid": "d3a22197-3b2b-4bc7-ae5a-99d140cd7a75" 00:28:32.763 } 00:28:32.763 18:23:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:32.763 [2024-12-13 18:23:06.816430] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.763 18:23:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:32.763 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:33.083 [2024-12-13 18:23:07.216882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:33.083 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:33.083 [2024-12-13 18:23:07.421328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:33.360 18:23:07 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:33.620 Fill FTL, iteration 1 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95386 00:28:33.620 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:33.621 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:33.621 18:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95386 /var/tmp/spdk.tgt.sock 00:28:33.621 18:23:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95386 ']' 00:28:33.621 18:23:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:33.621 18:23:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.621 18:23:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:33.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:33.621 18:23:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.621 18:23:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:33.621 [2024-12-13 18:23:07.846234] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
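The 'Starting SPDK' line above and the EAL parameter line that follows both belong to pid 95386: tcp_initiator_setup launches a short-lived second spdk_tgt, pinned to core 1 with its own RPC socket, whose only job is to attach the exported namespace and dump the resulting bdev config. Sketched from the RPC calls in the trace (socket path and NQN exactly as this test uses them; the redirect into ini.json is not visible in the xtrace and is an assumption):

  # target side: export the ftl bdev over NVMe/TCP on 127.0.0.1:4420
  scripts/rpc.py nvmf_create_transport --trtype TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  # initiator side: separate target instance, separate RPC socket
  build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
      -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

The attach comes back as bdev ftln1; its config is then serialized with save_subsystem_config -n bdev (wrapped in the '{"subsystems": [' / ']}' echoes seen below), the helper target is killed, and every spdk_dd run that follows recreates the attachment from that JSON via --json.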
00:28:33.621 [2024-12-13 18:23:07.846480] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95386 ] 00:28:33.621 [2024-12-13 18:23:07.992138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.881 [2024-12-13 18:23:08.010657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.454 18:23:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.454 18:23:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:34.454 18:23:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:34.711 ftln1 00:28:34.711 18:23:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:34.711 18:23:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95386 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95386 ']' 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95386 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95386 00:28:34.970 killing process with pid 95386 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95386' 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95386 00:28:34.970 18:23:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95386 00:28:35.228 18:23:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:35.228 18:23:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:35.228 [2024-12-13 18:23:09.480494] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
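The EAL parameter line below belongs to pid 95413, the first data mover. Fill iteration 1 streams 1 GiB of /dev/urandom into ftln1 in 1 MiB blocks at queue depth 2, starting at block offset 0; restated with the knobs that upgrade_shutdown.sh lines 28-34 parameterize (bs, count, qd, seek):

  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 \
      --bs=1048576 --count=1024 --qd=2 --seek=0

The per-interval 'Copying:' lines that follow put the write rate at roughly 200-250 MBps, 225 MBps on average for the pass.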
00:28:35.228 [2024-12-13 18:23:09.480802] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95413 ] 00:28:35.486 [2024-12-13 18:23:09.624520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.486 [2024-12-13 18:23:09.642865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.867  [2024-12-13T18:23:12.186Z] Copying: 197/1024 [MB] (197 MBps) [2024-12-13T18:23:13.129Z] Copying: 394/1024 [MB] (197 MBps) [2024-12-13T18:23:14.071Z] Copying: 645/1024 [MB] (251 MBps) [2024-12-13T18:23:14.642Z] Copying: 887/1024 [MB] (242 MBps) [2024-12-13T18:23:14.642Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:28:40.265 00:28:40.265 18:23:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:40.265 18:23:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:40.265 Calculate MD5 checksum, iteration 1 00:28:40.265 18:23:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:40.265 18:23:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:40.265 18:23:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:40.265 18:23:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:40.265 18:23:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:40.266 18:23:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:40.266 [2024-12-13 18:23:14.586378] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
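Next comes iteration 1's verification pass (pid 95466 in the EAL line below): the same 1 GiB is read back from ftln1 into a scratch file and hashed, and the digest is kept in sums[0] for later comparison. Roughly, in the harness's terms:

  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --ib=ftln1 --of=test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[0]=$(md5sum test/ftl/file | cut -f1 -d' ')   # -> af1ef32d9cbb0ff80bf1d76b11cdcefe on this run

Reads come back much faster than the fills went in (641 MBps average here vs 225), presumably because the read path is gated by neither /dev/urandom nor FTL write buffering.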
00:28:40.266 [2024-12-13 18:23:14.586670] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95466 ] 00:28:40.524 [2024-12-13 18:23:14.727057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.524 [2024-12-13 18:23:14.744069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.907  [2024-12-13T18:23:16.545Z] Copying: 653/1024 [MB] (653 MBps) [2024-12-13T18:23:16.806Z] Copying: 1024/1024 [MB] (average 641 MBps) 00:28:42.429 00:28:42.429 18:23:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:42.429 18:23:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:44.330 Fill FTL, iteration 2 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=af1ef32d9cbb0ff80bf1d76b11cdcefe 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:44.330 18:23:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:44.588 [2024-12-13 18:23:18.740202] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
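Iteration 2 repeats the fill one gigabyte further into the device (pid 95517 below); seek and skip were advanced by the previous count, so the loop's offset bookkeeping amounts to the following sketch (variable names as in the trace):

  # per-iteration fill + readback, offsets advancing by count=1024 MiB
  tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
  seek=$((seek + 1024))      # 0 -> 1024 -> 2048
  tcp_dd --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
  skip=$((skip + 1024))
  sums[i]=$(md5sum test/ftl/file | cut -f1 -d' ')

With 2 GiB written against the 5 GiB NV cache, this is consistent with what bdev_ftl_get_properties reports further below: chunks 1 and 2 CLOSED at utilization 1.0 and chunk 3 freshly opened.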
00:28:44.588 [2024-12-13 18:23:18.740440] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95517 ] 00:28:44.588 [2024-12-13 18:23:18.879772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.588 [2024-12-13 18:23:18.896909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.962  [2024-12-13T18:23:21.273Z] Copying: 200/1024 [MB] (200 MBps) [2024-12-13T18:23:22.213Z] Copying: 436/1024 [MB] (236 MBps) [2024-12-13T18:23:23.154Z] Copying: 683/1024 [MB] (247 MBps) [2024-12-13T18:23:23.725Z] Copying: 925/1024 [MB] (242 MBps) [2024-12-13T18:23:23.725Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:28:49.348 00:28:49.348 Calculate MD5 checksum, iteration 2 00:28:49.348 18:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:49.348 18:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:49.348 18:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:49.348 18:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:49.348 18:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:49.348 18:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:49.348 18:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:49.348 18:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:49.348 [2024-12-13 18:23:23.704937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
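The last dd pass (pid 95577 below) reads the second gigabyte back at --skip=1024 and yields sums[1]=26729d8897316cf600a7d0544adb7619. With the data in place, the test turns to the property RPCs that give the suite its name, flipping verbose_mode and prep_upgrade_on_shutdown on the live device and then counting how many NV cache chunks hold data, roughly:

  scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
  scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # count NV cache chunks with any data in them
  scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

In the trace that filter returns used=3 (two CLOSED chunks plus the barely-used OPEN one), and the [[ 3 -eq 0 ]] guard that follows confirms there is dirty cache data for the shutdown-upgrade path to exercise.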
00:28:49.348 [2024-12-13 18:23:23.705351] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95577 ] 00:28:49.609 [2024-12-13 18:23:23.846874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.609 [2024-12-13 18:23:23.865429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.992  [2024-12-13T18:23:25.938Z] Copying: 607/1024 [MB] (607 MBps) [2024-12-13T18:23:27.321Z] Copying: 1024/1024 [MB] (average 603 MBps) 00:28:52.944 00:28:52.944 18:23:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:52.944 18:23:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:54.844 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:54.845 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=26729d8897316cf600a7d0544adb7619 00:28:54.845 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:54.845 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:54.845 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:55.103 [2024-12-13 18:23:29.320860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.103 [2024-12-13 18:23:29.320914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:55.103 [2024-12-13 18:23:29.320931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:55.103 [2024-12-13 18:23:29.320941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.103 [2024-12-13 18:23:29.320959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.103 [2024-12-13 18:23:29.320967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:55.103 [2024-12-13 18:23:29.320974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:55.103 [2024-12-13 18:23:29.320981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.103 [2024-12-13 18:23:29.320996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.103 [2024-12-13 18:23:29.321004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:55.103 [2024-12-13 18:23:29.321010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:55.103 [2024-12-13 18:23:29.321019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.103 [2024-12-13 18:23:29.321072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.202 ms, result 0 00:28:55.103 true 00:28:55.103 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:55.361 { 00:28:55.361 "name": "ftl", 00:28:55.361 "properties": [ 00:28:55.361 { 00:28:55.361 "name": "superblock_version", 00:28:55.361 "value": 5, 00:28:55.361 "read-only": true 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "name": "base_device", 00:28:55.361 "bands": [ 00:28:55.361 { 00:28:55.361 "id": 0, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 
00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 1, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 2, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 3, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 4, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 5, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 6, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 7, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 8, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 9, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 10, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 11, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 12, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 13, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 14, 00:28:55.361 "state": "FREE", 00:28:55.361 "validity": 0.0 00:28:55.361 }, 00:28:55.361 { 00:28:55.361 "id": 15, 00:28:55.362 "state": "FREE", 00:28:55.362 "validity": 0.0 00:28:55.362 }, 00:28:55.362 { 00:28:55.362 "id": 16, 00:28:55.362 "state": "FREE", 00:28:55.362 "validity": 0.0 00:28:55.362 }, 00:28:55.362 { 00:28:55.362 "id": 17, 00:28:55.362 "state": "FREE", 00:28:55.362 "validity": 0.0 00:28:55.362 } 00:28:55.362 ], 00:28:55.362 "read-only": true 00:28:55.362 }, 00:28:55.362 { 00:28:55.362 "name": "cache_device", 00:28:55.362 "type": "bdev", 00:28:55.362 "chunks": [ 00:28:55.362 { 00:28:55.362 "id": 0, 00:28:55.362 "state": "INACTIVE", 00:28:55.362 "utilization": 0.0 00:28:55.362 }, 00:28:55.362 { 00:28:55.362 "id": 1, 00:28:55.362 "state": "CLOSED", 00:28:55.362 "utilization": 1.0 00:28:55.362 }, 00:28:55.362 { 00:28:55.362 "id": 2, 00:28:55.362 "state": "CLOSED", 00:28:55.362 "utilization": 1.0 00:28:55.362 }, 00:28:55.362 { 00:28:55.362 "id": 3, 00:28:55.362 "state": "OPEN", 00:28:55.362 "utilization": 0.001953125 00:28:55.362 }, 00:28:55.362 { 00:28:55.362 "id": 4, 00:28:55.362 "state": "OPEN", 00:28:55.362 "utilization": 0.0 00:28:55.362 } 00:28:55.362 ], 00:28:55.362 "read-only": true 00:28:55.362 }, 00:28:55.362 { 00:28:55.362 "name": "verbose_mode", 00:28:55.362 "value": true, 00:28:55.362 "unit": "", 00:28:55.362 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:55.362 }, 00:28:55.362 { 00:28:55.362 "name": "prep_upgrade_on_shutdown", 00:28:55.362 "value": false, 00:28:55.362 "unit": "", 00:28:55.362 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:55.362 } 00:28:55.362 ] 00:28:55.362 } 00:28:55.362 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:55.362 [2024-12-13 18:23:29.637080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:55.362 [2024-12-13 18:23:29.637112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:55.362 [2024-12-13 18:23:29.637120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:55.362 [2024-12-13 18:23:29.637134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.362 [2024-12-13 18:23:29.637151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.362 [2024-12-13 18:23:29.637157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:55.362 [2024-12-13 18:23:29.637163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:55.362 [2024-12-13 18:23:29.637170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.362 [2024-12-13 18:23:29.637184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.362 [2024-12-13 18:23:29.637189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:55.362 [2024-12-13 18:23:29.637195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:55.362 [2024-12-13 18:23:29.637200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.362 [2024-12-13 18:23:29.637262] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.151 ms, result 0 00:28:55.362 true 00:28:55.362 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:55.362 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:55.362 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:55.620 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:55.620 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:55.620 18:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:55.620 [2024-12-13 18:23:29.994710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.620 [2024-12-13 18:23:29.994743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:55.620 [2024-12-13 18:23:29.994751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:55.620 [2024-12-13 18:23:29.994757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.620 [2024-12-13 18:23:29.994774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.620 [2024-12-13 18:23:29.994780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:55.620 [2024-12-13 18:23:29.994787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:55.620 [2024-12-13 18:23:29.994793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.620 [2024-12-13 18:23:29.994807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.620 [2024-12-13 18:23:29.994813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:55.620 [2024-12-13 18:23:29.994819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:55.620 [2024-12-13 18:23:29.994824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:55.620 [2024-12-13 18:23:29.994865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.145 ms, result 0 00:28:55.934 true 00:28:55.934 18:23:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:55.934 { 00:28:55.934 "name": "ftl", 00:28:55.934 "properties": [ 00:28:55.934 { 00:28:55.934 "name": "superblock_version", 00:28:55.934 "value": 5, 00:28:55.934 "read-only": true 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "name": "base_device", 00:28:55.934 "bands": [ 00:28:55.934 { 00:28:55.934 "id": 0, 00:28:55.934 "state": "FREE", 00:28:55.934 "validity": 0.0 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "id": 1, 00:28:55.934 "state": "FREE", 00:28:55.934 "validity": 0.0 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "id": 2, 00:28:55.934 "state": "FREE", 00:28:55.934 "validity": 0.0 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "id": 3, 00:28:55.934 "state": "FREE", 00:28:55.934 "validity": 0.0 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "id": 4, 00:28:55.934 "state": "FREE", 00:28:55.934 "validity": 0.0 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "id": 5, 00:28:55.934 "state": "FREE", 00:28:55.934 "validity": 0.0 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "id": 6, 00:28:55.934 "state": "FREE", 00:28:55.934 "validity": 0.0 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "id": 7, 00:28:55.934 "state": "FREE", 00:28:55.934 "validity": 0.0 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "id": 8, 00:28:55.934 "state": "FREE", 00:28:55.934 "validity": 0.0 00:28:55.934 }, 00:28:55.934 { 00:28:55.934 "id": 9, 00:28:55.935 "state": "FREE", 00:28:55.935 "validity": 0.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 10, 00:28:55.935 "state": "FREE", 00:28:55.935 "validity": 0.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 11, 00:28:55.935 "state": "FREE", 00:28:55.935 "validity": 0.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 12, 00:28:55.935 "state": "FREE", 00:28:55.935 "validity": 0.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 13, 00:28:55.935 "state": "FREE", 00:28:55.935 "validity": 0.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 14, 00:28:55.935 "state": "FREE", 00:28:55.935 "validity": 0.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 15, 00:28:55.935 "state": "FREE", 00:28:55.935 "validity": 0.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 16, 00:28:55.935 "state": "FREE", 00:28:55.935 "validity": 0.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 17, 00:28:55.935 "state": "FREE", 00:28:55.935 "validity": 0.0 00:28:55.935 } 00:28:55.935 ], 00:28:55.935 "read-only": true 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "name": "cache_device", 00:28:55.935 "type": "bdev", 00:28:55.935 "chunks": [ 00:28:55.935 { 00:28:55.935 "id": 0, 00:28:55.935 "state": "INACTIVE", 00:28:55.935 "utilization": 0.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 1, 00:28:55.935 "state": "CLOSED", 00:28:55.935 "utilization": 1.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 2, 00:28:55.935 "state": "CLOSED", 00:28:55.935 "utilization": 1.0 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 3, 00:28:55.935 "state": "OPEN", 00:28:55.935 "utilization": 0.001953125 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "id": 4, 00:28:55.935 "state": "OPEN", 00:28:55.935 "utilization": 0.0 00:28:55.935 } 00:28:55.935 ], 00:28:55.935 "read-only": true 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "name": "verbose_mode", 
00:28:55.935 "value": true, 00:28:55.935 "unit": "", 00:28:55.935 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:55.935 }, 00:28:55.935 { 00:28:55.935 "name": "prep_upgrade_on_shutdown", 00:28:55.935 "value": true, 00:28:55.935 "unit": "", 00:28:55.935 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:55.935 } 00:28:55.935 ] 00:28:55.935 } 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95258 ]] 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95258 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95258 ']' 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95258 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95258 00:28:55.935 killing process with pid 95258 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95258' 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95258 00:28:55.935 18:23:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95258 00:28:56.196 [2024-12-13 18:23:30.346321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:56.196 [2024-12-13 18:23:30.352559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.196 [2024-12-13 18:23:30.352594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:56.196 [2024-12-13 18:23:30.352605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:56.196 [2024-12-13 18:23:30.352611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.196 [2024-12-13 18:23:30.352631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:56.196 [2024-12-13 18:23:30.353148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.196 [2024-12-13 18:23:30.353174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:56.196 [2024-12-13 18:23:30.353182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.505 ms 00:28:56.196 [2024-12-13 18:23:30.353188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.325 [2024-12-13 18:23:38.140944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.325 [2024-12-13 18:23:38.141010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:04.325 [2024-12-13 18:23:38.141024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7787.704 ms 00:29:04.325 [2024-12-13 18:23:38.141033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.325 [2024-12-13 18:23:38.142280] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:04.325 [2024-12-13 18:23:38.142305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:04.326 [2024-12-13 18:23:38.142314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.235 ms 00:29:04.326 [2024-12-13 18:23:38.142321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.143194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.143208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:04.326 [2024-12-13 18:23:38.143216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.852 ms 00:29:04.326 [2024-12-13 18:23:38.143222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.145857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.145886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:04.326 [2024-12-13 18:23:38.145895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.589 ms 00:29:04.326 [2024-12-13 18:23:38.145901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.149095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.149270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:04.326 [2024-12-13 18:23:38.149285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.169 ms 00:29:04.326 [2024-12-13 18:23:38.149297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.149352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.149360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:04.326 [2024-12-13 18:23:38.149367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:04.326 [2024-12-13 18:23:38.149374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.150678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.150700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:04.326 [2024-12-13 18:23:38.150708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.291 ms 00:29:04.326 [2024-12-13 18:23:38.150714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.152302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.152397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:04.326 [2024-12-13 18:23:38.152409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.564 ms 00:29:04.326 [2024-12-13 18:23:38.152415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.153572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.153593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:04.326 [2024-12-13 18:23:38.153601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.134 ms 00:29:04.326 [2024-12-13 18:23:38.153606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.155662] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.155944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:04.326 [2024-12-13 18:23:38.155956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.904 ms 00:29:04.326 [2024-12-13 18:23:38.155962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.156188] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:04.326 [2024-12-13 18:23:38.156215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:04.326 [2024-12-13 18:23:38.156226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:04.326 [2024-12-13 18:23:38.156232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:04.326 [2024-12-13 18:23:38.156239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:04.326 [2024-12-13 18:23:38.156349] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:04.326 [2024-12-13 18:23:38.156355] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d3a22197-3b2b-4bc7-ae5a-99d140cd7a75 00:29:04.326 [2024-12-13 18:23:38.156361] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:04.326 [2024-12-13 18:23:38.156373] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:04.326 [2024-12-13 18:23:38.156379] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:04.326 [2024-12-13 18:23:38.156386] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:04.326 [2024-12-13 18:23:38.156392] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:04.326 [2024-12-13 18:23:38.156399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:04.326 [2024-12-13 18:23:38.156404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:04.326 [2024-12-13 18:23:38.156411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:04.326 [2024-12-13 18:23:38.156417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:04.326 [2024-12-13 18:23:38.156426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.156433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:04.326 [2024-12-13 18:23:38.156440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.240 ms 00:29:04.326 [2024-12-13 18:23:38.156447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.158155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.158181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:04.326 [2024-12-13 18:23:38.158190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.693 ms 00:29:04.326 [2024-12-13 18:23:38.158198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.326 [2024-12-13 18:23:38.158295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.326 [2024-12-13 18:23:38.158304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:04.326 [2024-12-13 18:23:38.158311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.082 ms 00:29:04.326 [2024-12-13 18:23:38.158317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.164371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.164507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:04.327 [2024-12-13 18:23:38.164553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.164571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.164606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.164624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:04.327 [2024-12-13 18:23:38.164640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.164655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.164730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.164752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:04.327 [2024-12-13 18:23:38.164769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.164920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.164949] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.164965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:04.327 [2024-12-13 18:23:38.164986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.165001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.176180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.176331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:04.327 [2024-12-13 18:23:38.176375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.176393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.184937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.185056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:04.327 [2024-12-13 18:23:38.185096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.185114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.185199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.185228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:04.327 [2024-12-13 18:23:38.185254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.185270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.185308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.185361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:04.327 [2024-12-13 18:23:38.185382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.185397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.185466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.185486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:04.327 [2024-12-13 18:23:38.185505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.185520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.185581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.185608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:04.327 [2024-12-13 18:23:38.185624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.185640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.185686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.185704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:04.327 [2024-12-13 18:23:38.185723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.185739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 
[2024-12-13 18:23:38.185792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:04.327 [2024-12-13 18:23:38.185815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:04.327 [2024-12-13 18:23:38.185832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:04.327 [2024-12-13 18:23:38.185846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.327 [2024-12-13 18:23:38.186006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7833.388 ms, result 0 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:10.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95790 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95790 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95790 ']' 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:10.910 18:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:10.910 [2024-12-13 18:23:44.423553] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
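With prep_upgrade_on_shutdown set, killing pid 95258 triggers the full 'FTL shutdown' sequence traced above: L2P, NV cache, valid map, P2L, band and trim metadata are persisted, the clean state and band statistics are dumped, and the startup steps are unwound as Rollback actions, finishing after 7833.388 ms with result 0. The test is now bringing the target back up from the config it saved earlier (sh@75 tcp_target_setup). The pattern, roughly as ftl/common.sh drives it; killprocess and waitforlisten are test helpers, sketched here with plain kill/wait:

    # Sketch of tcp_target_shutdown followed by tcp_target_setup, as traced above.
    kill "$spdk_tgt_pid"              # SIGTERM -> graceful 'FTL shutdown'
    wait "$spdk_tgt_pid" 2>/dev/null
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!                   # the log shows the new pid as 95790
    waitforlisten "$spdk_tgt_pid"     # helper: poll until /var/tmp/spdk.sock answers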
00:29:10.910 [2024-12-13 18:23:44.423673] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95790 ] 00:29:10.910 [2024-12-13 18:23:44.565680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.910 [2024-12-13 18:23:44.590001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.910 [2024-12-13 18:23:44.884376] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:10.910 [2024-12-13 18:23:44.884432] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:10.910 [2024-12-13 18:23:45.030411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.030445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:10.910 [2024-12-13 18:23:45.030457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:10.910 [2024-12-13 18:23:45.030464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.030506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.030514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:10.910 [2024-12-13 18:23:45.030523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:10.910 [2024-12-13 18:23:45.030529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.030543] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:10.910 [2024-12-13 18:23:45.030720] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:10.910 [2024-12-13 18:23:45.030732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.030738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:10.910 [2024-12-13 18:23:45.030744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:29:10.910 [2024-12-13 18:23:45.030750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.031977] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:10.910 [2024-12-13 18:23:45.034852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.034883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:10.910 [2024-12-13 18:23:45.034891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.877 ms 00:29:10.910 [2024-12-13 18:23:45.034898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.034942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.034950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:10.910 [2024-12-13 18:23:45.034957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:10.910 [2024-12-13 18:23:45.034962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.041106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 
18:23:45.041131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:10.910 [2024-12-13 18:23:45.041138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.095 ms 00:29:10.910 [2024-12-13 18:23:45.041143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.041181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.041188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:10.910 [2024-12-13 18:23:45.041194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:10.910 [2024-12-13 18:23:45.041200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.041230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.041254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:10.910 [2024-12-13 18:23:45.041261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:10.910 [2024-12-13 18:23:45.041267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.041285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:10.910 [2024-12-13 18:23:45.042812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.042836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:10.910 [2024-12-13 18:23:45.042843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.533 ms 00:29:10.910 [2024-12-13 18:23:45.042849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.042874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.042883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:10.910 [2024-12-13 18:23:45.042889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:10.910 [2024-12-13 18:23:45.042895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.042911] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:10.910 [2024-12-13 18:23:45.042928] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:10.910 [2024-12-13 18:23:45.042956] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:10.910 [2024-12-13 18:23:45.042968] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:10.910 [2024-12-13 18:23:45.043054] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:10.910 [2024-12-13 18:23:45.043064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:10.910 [2024-12-13 18:23:45.043073] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:10.910 [2024-12-13 18:23:45.043086] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:10.910 [2024-12-13 18:23:45.043093] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:10.910 [2024-12-13 18:23:45.043099] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:10.910 [2024-12-13 18:23:45.043108] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:10.910 [2024-12-13 18:23:45.043113] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:10.910 [2024-12-13 18:23:45.043120] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:10.910 [2024-12-13 18:23:45.043127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.043133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:10.910 [2024-12-13 18:23:45.043141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms 00:29:10.910 [2024-12-13 18:23:45.043150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.043217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.910 [2024-12-13 18:23:45.043227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:10.910 [2024-12-13 18:23:45.043233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:29:10.910 [2024-12-13 18:23:45.043256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.910 [2024-12-13 18:23:45.043337] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:10.910 [2024-12-13 18:23:45.043346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:10.910 [2024-12-13 18:23:45.043353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:10.910 [2024-12-13 18:23:45.043366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.910 [2024-12-13 18:23:45.043372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:10.910 [2024-12-13 18:23:45.043378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:10.911 [2024-12-13 18:23:45.043388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:10.911 [2024-12-13 18:23:45.043394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:10.911 [2024-12-13 18:23:45.043399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:10.911 [2024-12-13 18:23:45.043409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:10.911 [2024-12-13 18:23:45.043413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:10.911 [2024-12-13 18:23:45.043424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:10.911 [2024-12-13 18:23:45.043430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:10.911 [2024-12-13 18:23:45.043444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:10.911 [2024-12-13 18:23:45.043449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043454] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:10.911 [2024-12-13 18:23:45.043459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:10.911 [2024-12-13 18:23:45.043465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:10.911 [2024-12-13 18:23:45.043471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:10.911 [2024-12-13 18:23:45.043478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:10.911 [2024-12-13 18:23:45.043484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:10.911 [2024-12-13 18:23:45.043489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:10.911 [2024-12-13 18:23:45.043495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:10.911 [2024-12-13 18:23:45.043501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:10.911 [2024-12-13 18:23:45.043506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:10.911 [2024-12-13 18:23:45.043513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:10.911 [2024-12-13 18:23:45.043519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:10.911 [2024-12-13 18:23:45.043525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:10.911 [2024-12-13 18:23:45.043533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:10.911 [2024-12-13 18:23:45.043539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:10.911 [2024-12-13 18:23:45.043552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:10.911 [2024-12-13 18:23:45.043558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:10.911 [2024-12-13 18:23:45.043570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:10.911 [2024-12-13 18:23:45.043587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:10.911 [2024-12-13 18:23:45.043593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043599] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:10.911 [2024-12-13 18:23:45.043606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:10.911 [2024-12-13 18:23:45.043612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:10.911 [2024-12-13 18:23:45.043618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.911 [2024-12-13 18:23:45.043627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:10.911 [2024-12-13 18:23:45.043634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:10.911 [2024-12-13 18:23:45.043640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:10.911 [2024-12-13 18:23:45.043646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:10.911 [2024-12-13 18:23:45.043652] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:10.911 [2024-12-13 18:23:45.043659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:10.911 [2024-12-13 18:23:45.043666] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:10.911 [2024-12-13 18:23:45.043674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:10.911 [2024-12-13 18:23:45.043687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:10.911 [2024-12-13 18:23:45.043707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:10.911 [2024-12-13 18:23:45.043713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:10.911 [2024-12-13 18:23:45.043718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:10.911 [2024-12-13 18:23:45.043724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:10.911 [2024-12-13 18:23:45.043774] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:10.911 [2024-12-13 18:23:45.043784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:10.911 [2024-12-13 18:23:45.043799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:10.911 [2024-12-13 18:23:45.043806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:10.911 [2024-12-13 18:23:45.043817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:10.911 [2024-12-13 18:23:45.043824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.911 [2024-12-13 18:23:45.043832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:10.911 [2024-12-13 18:23:45.043840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:29:10.911 [2024-12-13 18:23:45.043846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.911 [2024-12-13 18:23:45.043884] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:10.911 [2024-12-13 18:23:45.043893] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:15.116 [2024-12-13 18:23:49.087812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.088211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:15.116 [2024-12-13 18:23:49.088258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4043.908 ms 00:29:15.116 [2024-12-13 18:23:49.088282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.107847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.107915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:15.116 [2024-12-13 18:23:49.107932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.410 ms 00:29:15.116 [2024-12-13 18:23:49.107943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.108046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.108058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:15.116 [2024-12-13 18:23:49.108070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:15.116 [2024-12-13 18:23:49.108080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.125973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.126036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:15.116 [2024-12-13 18:23:49.126050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.841 ms 00:29:15.116 [2024-12-13 18:23:49.126069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.126122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.126132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:15.116 [2024-12-13 18:23:49.126142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:15.116 [2024-12-13 18:23:49.126156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.126970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.127010] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:15.116 [2024-12-13 18:23:49.127023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.728 ms 00:29:15.116 [2024-12-13 18:23:49.127032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.127094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.127104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:15.116 [2024-12-13 18:23:49.127114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:15.116 [2024-12-13 18:23:49.127124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.139361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.139411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:15.116 [2024-12-13 18:23:49.139440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.205 ms 00:29:15.116 [2024-12-13 18:23:49.139452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.157528] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:15.116 [2024-12-13 18:23:49.157612] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:15.116 [2024-12-13 18:23:49.157640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.157655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:15.116 [2024-12-13 18:23:49.157670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.059 ms 00:29:15.116 [2024-12-13 18:23:49.157681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.163494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.163547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:15.116 [2024-12-13 18:23:49.163562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.739 ms 00:29:15.116 [2024-12-13 18:23:49.163572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.166650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.166704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:15.116 [2024-12-13 18:23:49.166716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.009 ms 00:29:15.116 [2024-12-13 18:23:49.166724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.169660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.169712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:15.116 [2024-12-13 18:23:49.169724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.879 ms 00:29:15.116 [2024-12-13 18:23:49.169732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.170120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.170138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:15.116 [2024-12-13 
18:23:49.170149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:29:15.116 [2024-12-13 18:23:49.170159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.201394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.201454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:15.116 [2024-12-13 18:23:49.201468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.209 ms 00:29:15.116 [2024-12-13 18:23:49.201477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.209980] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:15.116 [2024-12-13 18:23:49.210954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.211003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:15.116 [2024-12-13 18:23:49.211014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.420 ms 00:29:15.116 [2024-12-13 18:23:49.211023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.211108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.211124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:15.116 [2024-12-13 18:23:49.211133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:15.116 [2024-12-13 18:23:49.211141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.116 [2024-12-13 18:23:49.211193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.116 [2024-12-13 18:23:49.211205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:15.117 [2024-12-13 18:23:49.211220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:15.117 [2024-12-13 18:23:49.211229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.117 [2024-12-13 18:23:49.211277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.117 [2024-12-13 18:23:49.211288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:15.117 [2024-12-13 18:23:49.211297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:15.117 [2024-12-13 18:23:49.211306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.117 [2024-12-13 18:23:49.211350] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:15.117 [2024-12-13 18:23:49.211362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.117 [2024-12-13 18:23:49.211371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:15.117 [2024-12-13 18:23:49.211380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:15.117 [2024-12-13 18:23:49.211391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.117 [2024-12-13 18:23:49.216755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.117 [2024-12-13 18:23:49.216804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:15.117 [2024-12-13 18:23:49.216815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.343 ms 00:29:15.117 [2024-12-13 18:23:49.216825] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:15.117 [2024-12-13 18:23:49.216911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.117 [2024-12-13 18:23:49.216923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:15.117 [2024-12-13 18:23:49.216933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:15.117 [2024-12-13 18:23:49.216942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.117 [2024-12-13 18:23:49.218363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4187.349 ms, result 0 00:29:15.117 [2024-12-13 18:23:49.231913] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.117 [2024-12-13 18:23:49.248626] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:15.117 [2024-12-13 18:23:49.256795] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:15.117 18:23:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.117 18:23:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:15.117 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:15.117 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:15.117 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:15.117 [2024-12-13 18:23:49.488767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.117 [2024-12-13 18:23:49.488816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:15.117 [2024-12-13 18:23:49.488830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:15.117 [2024-12-13 18:23:49.488839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.117 [2024-12-13 18:23:49.488864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.117 [2024-12-13 18:23:49.488874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:15.117 [2024-12-13 18:23:49.488886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:15.117 [2024-12-13 18:23:49.488895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.117 [2024-12-13 18:23:49.488916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.117 [2024-12-13 18:23:49.488924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:15.117 [2024-12-13 18:23:49.488933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:15.117 [2024-12-13 18:23:49.488941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.117 [2024-12-13 18:23:49.488997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.225 ms, result 0 00:29:15.378 true 00:29:15.378 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:15.378 { 00:29:15.378 "name": "ftl", 00:29:15.378 "properties": [ 00:29:15.378 { 00:29:15.378 "name": "superblock_version", 00:29:15.378 "value": 5, 00:29:15.378 "read-only": true 00:29:15.378 }, 00:29:15.378 { 
00:29:15.378 "name": "base_device", 00:29:15.378 "bands": [ 00:29:15.378 { 00:29:15.378 "id": 0, 00:29:15.378 "state": "CLOSED", 00:29:15.378 "validity": 1.0 00:29:15.378 }, 00:29:15.378 { 00:29:15.378 "id": 1, 00:29:15.378 "state": "CLOSED", 00:29:15.378 "validity": 1.0 00:29:15.378 }, 00:29:15.378 { 00:29:15.378 "id": 2, 00:29:15.378 "state": "CLOSED", 00:29:15.378 "validity": 0.007843137254901933 00:29:15.378 }, 00:29:15.378 { 00:29:15.378 "id": 3, 00:29:15.378 "state": "FREE", 00:29:15.378 "validity": 0.0 00:29:15.378 }, 00:29:15.378 { 00:29:15.378 "id": 4, 00:29:15.378 "state": "FREE", 00:29:15.378 "validity": 0.0 00:29:15.378 }, 00:29:15.378 { 00:29:15.378 "id": 5, 00:29:15.378 "state": "FREE", 00:29:15.378 "validity": 0.0 00:29:15.378 }, 00:29:15.378 { 00:29:15.378 "id": 6, 00:29:15.378 "state": "FREE", 00:29:15.378 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 7, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 8, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 9, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 10, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 11, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 12, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 13, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 14, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 15, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 16, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 17, 00:29:15.379 "state": "FREE", 00:29:15.379 "validity": 0.0 00:29:15.379 } 00:29:15.379 ], 00:29:15.379 "read-only": true 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "name": "cache_device", 00:29:15.379 "type": "bdev", 00:29:15.379 "chunks": [ 00:29:15.379 { 00:29:15.379 "id": 0, 00:29:15.379 "state": "INACTIVE", 00:29:15.379 "utilization": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 1, 00:29:15.379 "state": "OPEN", 00:29:15.379 "utilization": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 2, 00:29:15.379 "state": "OPEN", 00:29:15.379 "utilization": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 3, 00:29:15.379 "state": "FREE", 00:29:15.379 "utilization": 0.0 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "id": 4, 00:29:15.379 "state": "FREE", 00:29:15.379 "utilization": 0.0 00:29:15.379 } 00:29:15.379 ], 00:29:15.379 "read-only": true 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "name": "verbose_mode", 00:29:15.379 "value": true, 00:29:15.379 "unit": "", 00:29:15.379 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:15.379 }, 00:29:15.379 { 00:29:15.379 "name": "prep_upgrade_on_shutdown", 00:29:15.379 "value": false, 00:29:15.379 "unit": "", 00:29:15.379 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:15.379 } 00:29:15.379 ] 00:29:15.379 } 00:29:15.379 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:15.379 18:23:49 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:15.379 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:15.638 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:15.638 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:15.638 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:15.638 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:15.638 18:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:15.896 Validate MD5 checksum, iteration 1 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:15.896 18:23:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:15.896 [2024-12-13 18:23:50.240132] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
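The two jq filters issued above gate the validation phase on a quiesced FTL device: the first counts cache_device chunks whose utilization is non-zero, the second counts bands still reported as OPENED, and the test proceeds only once both counts are 0. A minimal standalone sketch of the same gate (assuming a reachable target, rpc.py and jq on PATH, and an FTL bdev named ftl; note the band list is keyed under base_device here, matching the property dump above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # count NV-cache chunks that still hold unflushed data
    used=$($rpc bdev_ftl_get_properties -b ftl | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # count bands still open for writing (band list sits under base_device in the dump above)
    opened=$($rpc bdev_ftl_get_properties -b ftl | jq '[.properties[] | select(.name == "base_device") | .bands[] | select(.state == "OPENED")] | length')
    [[ $used -eq 0 && $opened -eq 0 ]] || exit 1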
00:29:15.896 [2024-12-13 18:23:50.240260] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95860 ] 00:29:16.154 [2024-12-13 18:23:50.384326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.154 [2024-12-13 18:23:50.403691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.533  [2024-12-13T18:23:52.853Z] Copying: 652/1024 [MB] (652 MBps) [2024-12-13T18:23:53.426Z] Copying: 1024/1024 [MB] (average 581 MBps) 00:29:19.049 00:29:19.049 18:23:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:19.049 18:23:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:20.949 Validate MD5 checksum, iteration 2 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=af1ef32d9cbb0ff80bf1d76b11cdcefe 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ af1ef32d9cbb0ff80bf1d76b11cdcefe != \a\f\1\e\f\3\2\d\9\c\b\b\0\f\f\8\0\b\f\1\d\7\6\b\1\1\c\d\c\e\f\e ]] 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:20.949 18:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:21.206 [2024-12-13 18:23:55.370347] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
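Each "Validate MD5 checksum" iteration above reads one 1024 MiB slice of the exported ftln1 namespace over NVMe/TCP with spdk_dd, hashes the readback, and compares it against the sum recorded when the slice was written. A hedged sketch of a single iteration, reusing the flags from the invocation above (the expected value is the one logged for slice 0; it is shown here purely for illustration):

    expected=af1ef32d9cbb0ff80bf1d76b11cdcefe   # sum captured at write time
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    actual=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    [[ "$actual" == "$expected" ]] || { echo 'checksum mismatch' >&2; exit 1; }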
00:29:21.206 [2024-12-13 18:23:55.370457] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95920 ] 00:29:21.206 [2024-12-13 18:23:55.515793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.206 [2024-12-13 18:23:55.534475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.583  [2024-12-13T18:23:57.900Z] Copying: 580/1024 [MB] (580 MBps) [2024-12-13T18:23:58.468Z] Copying: 1024/1024 [MB] (average 538 MBps) 00:29:24.091 00:29:24.091 18:23:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:24.091 18:23:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=26729d8897316cf600a7d0544adb7619 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 26729d8897316cf600a7d0544adb7619 != \2\6\7\2\9\d\8\8\9\7\3\1\6\c\f\6\0\0\a\7\d\0\5\4\4\a\d\b\7\6\1\9 ]] 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 95790 ]] 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 95790 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:25.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95976 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95976 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95976 ']' 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
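Both slices now hash to their reference sums, so the test simulates a crash: the running target (pid 95790 above) is killed with SIGKILL, skipping every clean-shutdown step, and a fresh spdk_tgt is started from the saved tgt.json so the next FTL startup is forced through dirty-shutdown recovery. A minimal sketch of that kill-and-restart pattern, assuming the paths shown in the log (the readiness poll via rpc_get_methods is an illustrative stand-in for the harness's waitforlisten helper):

    kill -9 "$spdk_tgt_pid"        # FTL gets no chance to persist clean-shutdown state
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # poll the default RPC socket until the new target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done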
00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:25.989 18:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:26.248 [2024-12-13 18:24:00.408595] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:26.248 [2024-12-13 18:24:00.408897] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95976 ] 00:29:26.248 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 95790 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:26.248 [2024-12-13 18:24:00.548755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.248 [2024-12-13 18:24:00.572204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.506 [2024-12-13 18:24:00.867716] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:26.506 [2024-12-13 18:24:00.867779] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:26.766 [2024-12-13 18:24:01.013876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.766 [2024-12-13 18:24:01.013913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:26.766 [2024-12-13 18:24:01.013926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:26.766 [2024-12-13 18:24:01.013933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.766 [2024-12-13 18:24:01.013976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.766 [2024-12-13 18:24:01.013984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:26.766 [2024-12-13 18:24:01.013992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:26.766 [2024-12-13 18:24:01.013998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.766 [2024-12-13 18:24:01.014012] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:26.766 [2024-12-13 18:24:01.014203] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:26.766 [2024-12-13 18:24:01.014215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.766 [2024-12-13 18:24:01.014221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:26.766 [2024-12-13 18:24:01.014228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:29:26.766 [2024-12-13 18:24:01.014234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.766 [2024-12-13 18:24:01.014454] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:26.766 [2024-12-13 18:24:01.018996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.766 [2024-12-13 18:24:01.019030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:26.766 [2024-12-13 18:24:01.019038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.542 ms 
00:29:26.766 [2024-12-13 18:24:01.019048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.766 [2024-12-13 18:24:01.019993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.766 [2024-12-13 18:24:01.020020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:26.766 [2024-12-13 18:24:01.020029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:26.766 [2024-12-13 18:24:01.020037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.766 [2024-12-13 18:24:01.020260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.766 [2024-12-13 18:24:01.020270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:26.766 [2024-12-13 18:24:01.020277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:29:26.767 [2024-12-13 18:24:01.020283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.020317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.020325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:26.767 [2024-12-13 18:24:01.020331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:26.767 [2024-12-13 18:24:01.020336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.020358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.020368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:26.767 [2024-12-13 18:24:01.020376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:26.767 [2024-12-13 18:24:01.020382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.020398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:26.767 [2024-12-13 18:24:01.021123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.021137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:26.767 [2024-12-13 18:24:01.021144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.728 ms 00:29:26.767 [2024-12-13 18:24:01.021151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.021186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.021195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:26.767 [2024-12-13 18:24:01.021201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:26.767 [2024-12-13 18:24:01.021207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.021224] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:26.767 [2024-12-13 18:24:01.021405] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:26.767 [2024-12-13 18:24:01.021466] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:26.767 [2024-12-13 18:24:01.021503] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:26.767 [2024-12-13 
18:24:01.021609] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:26.767 [2024-12-13 18:24:01.022029] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:26.767 [2024-12-13 18:24:01.022065] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:26.767 [2024-12-13 18:24:01.022102] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:26.767 [2024-12-13 18:24:01.022127] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:26.767 [2024-12-13 18:24:01.022152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:26.767 [2024-12-13 18:24:01.022167] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:26.767 [2024-12-13 18:24:01.022181] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:26.767 [2024-12-13 18:24:01.022293] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:26.767 [2024-12-13 18:24:01.022317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.022334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:26.767 [2024-12-13 18:24:01.022354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.095 ms 00:29:26.767 [2024-12-13 18:24:01.022368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.022502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.022523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:26.767 [2024-12-13 18:24:01.022547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:26.767 [2024-12-13 18:24:01.022562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.022666] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:26.767 [2024-12-13 18:24:01.022687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:26.767 [2024-12-13 18:24:01.022787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:26.767 [2024-12-13 18:24:01.022807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.022822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:26.767 [2024-12-13 18:24:01.022837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.022880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:26.767 [2024-12-13 18:24:01.022899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:26.767 [2024-12-13 18:24:01.022915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:26.767 [2024-12-13 18:24:01.022929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.022942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:26.767 [2024-12-13 18:24:01.022956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:26.767 [2024-12-13 18:24:01.022970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 
18:24:01.022984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:26.767 [2024-12-13 18:24:01.023003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:26.767 [2024-12-13 18:24:01.023017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.023072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:26.767 [2024-12-13 18:24:01.023089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:26.767 [2024-12-13 18:24:01.023103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.023118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:26.767 [2024-12-13 18:24:01.023132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:26.767 [2024-12-13 18:24:01.023146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:26.767 [2024-12-13 18:24:01.023159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:26.767 [2024-12-13 18:24:01.023173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:26.767 [2024-12-13 18:24:01.023219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:26.767 [2024-12-13 18:24:01.023236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:26.767 [2024-12-13 18:24:01.023260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:26.767 [2024-12-13 18:24:01.023278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:26.767 [2024-12-13 18:24:01.023292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:26.767 [2024-12-13 18:24:01.023299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:26.767 [2024-12-13 18:24:01.023308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:26.767 [2024-12-13 18:24:01.023313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:26.767 [2024-12-13 18:24:01.023318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:26.767 [2024-12-13 18:24:01.023324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.023329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:26.767 [2024-12-13 18:24:01.023334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:26.767 [2024-12-13 18:24:01.023339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.023344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:26.767 [2024-12-13 18:24:01.023349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:26.767 [2024-12-13 18:24:01.023354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.023359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:26.767 [2024-12-13 18:24:01.023364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:26.767 [2024-12-13 18:24:01.023369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.023374] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:26.767 [2024-12-13 18:24:01.023383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:26.767 
[2024-12-13 18:24:01.023390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:26.767 [2024-12-13 18:24:01.023397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:26.767 [2024-12-13 18:24:01.023402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:26.767 [2024-12-13 18:24:01.023408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:26.767 [2024-12-13 18:24:01.023413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:26.767 [2024-12-13 18:24:01.023418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:26.767 [2024-12-13 18:24:01.023423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:26.767 [2024-12-13 18:24:01.023428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:26.767 [2024-12-13 18:24:01.023435] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:26.767 [2024-12-13 18:24:01.023442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:26.767 [2024-12-13 18:24:01.023455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:26.767 [2024-12-13 18:24:01.023473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:26.767 [2024-12-13 18:24:01.023479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:26.767 [2024-12-13 18:24:01.023484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:26.767 [2024-12-13 18:24:01.023493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:26.767 [2024-12-13 18:24:01.023532] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:26.767 [2024-12-13 18:24:01.023539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:26.767 [2024-12-13 18:24:01.023558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:26.767 [2024-12-13 18:24:01.023564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:26.767 [2024-12-13 18:24:01.023570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:26.767 [2024-12-13 18:24:01.023576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.023583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:26.767 [2024-12-13 18:24:01.023589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.963 ms 00:29:26.767 [2024-12-13 18:24:01.023596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.032040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.032069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:26.767 [2024-12-13 18:24:01.032077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.401 ms 00:29:26.767 [2024-12-13 18:24:01.032083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.032114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.032121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:26.767 [2024-12-13 18:24:01.032128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:26.767 [2024-12-13 18:24:01.032135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.042150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.042286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:26.767 [2024-12-13 18:24:01.042299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.970 ms 00:29:26.767 [2024-12-13 18:24:01.042306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.042330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.042337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:26.767 [2024-12-13 18:24:01.042347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:26.767 [2024-12-13 18:24:01.042353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.042434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.042446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:29:26.767 [2024-12-13 18:24:01.042453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:26.767 [2024-12-13 18:24:01.042462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.042497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.042505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:26.767 [2024-12-13 18:24:01.042511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:26.767 [2024-12-13 18:24:01.042518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.049109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.049136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:26.767 [2024-12-13 18:24:01.049144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.573 ms 00:29:26.767 [2024-12-13 18:24:01.049150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.049257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.049267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:26.767 [2024-12-13 18:24:01.049277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:26.767 [2024-12-13 18:24:01.049283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.066501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.066575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:26.767 [2024-12-13 18:24:01.066615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.199 ms 00:29:26.767 [2024-12-13 18:24:01.066633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.068929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.068984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:26.767 [2024-12-13 18:24:01.069013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:29:26.767 [2024-12-13 18:24:01.069032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.088277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.088311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:26.767 [2024-12-13 18:24:01.088325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.171 ms 00:29:26.767 [2024-12-13 18:24:01.088333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.088448] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:26.767 [2024-12-13 18:24:01.088542] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:26.767 [2024-12-13 18:24:01.088633] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:26.767 [2024-12-13 18:24:01.088724] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:26.767 [2024-12-13 18:24:01.088731] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.088739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:26.767 [2024-12-13 18:24:01.088749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:29:26.767 [2024-12-13 18:24:01.088755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.088796] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:26.767 [2024-12-13 18:24:01.088806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.088814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:26.767 [2024-12-13 18:24:01.088827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:26.767 [2024-12-13 18:24:01.088833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.092044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.092363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:26.767 [2024-12-13 18:24:01.092384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.192 ms 00:29:26.767 [2024-12-13 18:24:01.092396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.767 [2024-12-13 18:24:01.092947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.767 [2024-12-13 18:24:01.092963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:26.768 [2024-12-13 18:24:01.092971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:26.768 [2024-12-13 18:24:01.092978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.768 [2024-12-13 18:24:01.093048] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:26.768 [2024-12-13 18:24:01.093227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.768 [2024-12-13 18:24:01.093237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:26.768 [2024-12-13 18:24:01.093272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:29:26.768 [2024-12-13 18:24:01.093281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.706 [2024-12-13 18:24:02.060027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.706 [2024-12-13 18:24:02.060387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:27.706 [2024-12-13 18:24:02.060761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 966.477 ms 00:29:27.706 [2024-12-13 18:24:02.060806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.706 [2024-12-13 18:24:02.062836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.706 [2024-12-13 18:24:02.062984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:27.706 [2024-12-13 18:24:02.063055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.262 ms 00:29:27.706 [2024-12-13 18:24:02.063074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.706 [2024-12-13 18:24:02.063898] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:29:27.706 [2024-12-13 18:24:02.064049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.706 [2024-12-13 18:24:02.064102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:27.706 [2024-12-13 18:24:02.064124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.932 ms 00:29:27.706 [2024-12-13 18:24:02.064140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.706 [2024-12-13 18:24:02.064182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.706 [2024-12-13 18:24:02.064218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:27.706 [2024-12-13 18:24:02.064235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:27.706 [2024-12-13 18:24:02.064266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.706 [2024-12-13 18:24:02.064312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 971.262 ms, result 0 00:29:27.706 [2024-12-13 18:24:02.064477] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:27.706 [2024-12-13 18:24:02.064766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.706 [2024-12-13 18:24:02.064865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:27.706 [2024-12-13 18:24:02.064921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.291 ms 00:29:27.706 [2024-12-13 18:24:02.064940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.831915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.832327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:28.646 [2024-12-13 18:24:02.832505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 766.300 ms 00:29:28.646 [2024-12-13 18:24:02.832535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.834940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.835127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:28.646 [2024-12-13 18:24:02.835196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.698 ms 00:29:28.646 [2024-12-13 18:24:02.835221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.836141] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:28.646 [2024-12-13 18:24:02.836347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.836413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:28.646 [2024-12-13 18:24:02.836440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.060 ms 00:29:28.646 [2024-12-13 18:24:02.836469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.836634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.836701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:28.646 [2024-12-13 18:24:02.836725] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:28.646 [2024-12-13 18:24:02.836746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.836810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 772.324 ms, result 0 00:29:28.646 [2024-12-13 18:24:02.836990] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:28.646 [2024-12-13 18:24:02.837031] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:28.646 [2024-12-13 18:24:02.837065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.837089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:28.646 [2024-12-13 18:24:02.837111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1744.035 ms 00:29:28.646 [2024-12-13 18:24:02.837137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.837286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.837320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:28.646 [2024-12-13 18:24:02.837342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:28.646 [2024-12-13 18:24:02.837364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.847877] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:28.646 [2024-12-13 18:24:02.848152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.848177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:28.646 [2024-12-13 18:24:02.848189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.759 ms 00:29:28.646 [2024-12-13 18:24:02.848198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.849003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.849037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:28.646 [2024-12-13 18:24:02.849049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.677 ms 00:29:28.646 [2024-12-13 18:24:02.849057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.851376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.851419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:28.646 [2024-12-13 18:24:02.851440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.302 ms 00:29:28.646 [2024-12-13 18:24:02.851448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.851496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.851507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:28.646 [2024-12-13 18:24:02.851517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:28.646 [2024-12-13 18:24:02.851527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.851648] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.851660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:28.646 [2024-12-13 18:24:02.851673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:28.646 [2024-12-13 18:24:02.851682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.851706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.851719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:28.646 [2024-12-13 18:24:02.851730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:28.646 [2024-12-13 18:24:02.851738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.851785] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:28.646 [2024-12-13 18:24:02.851802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.851811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:28.646 [2024-12-13 18:24:02.851824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:28.646 [2024-12-13 18:24:02.851838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.851895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.646 [2024-12-13 18:24:02.851907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:28.646 [2024-12-13 18:24:02.851917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:28.646 [2024-12-13 18:24:02.851926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.646 [2024-12-13 18:24:02.853529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1838.975 ms, result 0 00:29:28.646 [2024-12-13 18:24:02.868751] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.646 [2024-12-13 18:24:02.884727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:28.646 [2024-12-13 18:24:02.892908] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:28.646 Validate MD5 checksum, iteration 1 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:28.646 18:24:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:28.907 [2024-12-13 18:24:03.038973] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:28.907 [2024-12-13 18:24:03.039329] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96011 ] 00:29:28.907 [2024-12-13 18:24:03.187540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.907 [2024-12-13 18:24:03.216729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.290  [2024-12-13T18:24:05.619Z] Copying: 489/1024 [MB] (489 MBps) [2024-12-13T18:24:05.619Z] Copying: 1014/1024 [MB] (525 MBps) [2024-12-13T18:24:06.190Z] Copying: 1024/1024 [MB] (average 508 MBps) 00:29:31.813 00:29:32.105 18:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:32.105 18:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=af1ef32d9cbb0ff80bf1d76b11cdcefe 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ af1ef32d9cbb0ff80bf1d76b11cdcefe != \a\f\1\e\f\3\2\d\9\c\b\b\0\f\f\8\0\b\f\1\d\7\6\b\1\1\c\d\c\e\f\e ]] 00:29:34.031 Validate MD5 checksum, iteration 2 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:34.031 18:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' 
--rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.290 [2024-12-13 18:24:08.440037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:34.290 [2024-12-13 18:24:08.440155] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96072 ] 00:29:34.290 [2024-12-13 18:24:08.585227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.290 [2024-12-13 18:24:08.604552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.671  [2024-12-13T18:24:10.993Z] Copying: 533/1024 [MB] (533 MBps) [2024-12-13T18:24:15.194Z] Copying: 1024/1024 [MB] (average 518 MBps) 00:29:40.817 00:29:40.817 18:24:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:40.817 18:24:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:42.201 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:42.202 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=26729d8897316cf600a7d0544adb7619 00:29:42.202 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 26729d8897316cf600a7d0544adb7619 != \2\6\7\2\9\d\8\8\9\7\3\1\6\c\f\6\0\0\a\7\d\0\5\4\4\a\d\b\7\6\1\9 ]] 00:29:42.202 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:42.202 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:42.202 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:42.202 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:42.202 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:42.202 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95976 ]] 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95976 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95976 ']' 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95976 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95976 00:29:42.464 killing process with pid 95976 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:42.464 18:24:16 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95976' 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95976 00:29:42.464 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95976 00:29:42.464 [2024-12-13 18:24:16.789585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:42.464 [2024-12-13 18:24:16.795534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.795569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:42.464 [2024-12-13 18:24:16.795579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:42.464 [2024-12-13 18:24:16.795585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.795602] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:42.464 [2024-12-13 18:24:16.795980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.795993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:42.464 [2024-12-13 18:24:16.796003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.367 ms 00:29:42.464 [2024-12-13 18:24:16.796010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.796194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.796202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:42.464 [2024-12-13 18:24:16.796208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.168 ms 00:29:42.464 [2024-12-13 18:24:16.796214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.797607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.797708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:42.464 [2024-12-13 18:24:16.797757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.380 ms 00:29:42.464 [2024-12-13 18:24:16.797780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.798671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.798737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:42.464 [2024-12-13 18:24:16.798776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:29:42.464 [2024-12-13 18:24:16.798795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.800156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.800276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:42.464 [2024-12-13 18:24:16.800322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.321 ms 00:29:42.464 [2024-12-13 18:24:16.800373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.801746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.801841] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:42.464 [2024-12-13 18:24:16.801883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.337 ms 00:29:42.464 [2024-12-13 18:24:16.801899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.801968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.802084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:42.464 [2024-12-13 18:24:16.802110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:42.464 [2024-12-13 18:24:16.802129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.803213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.803323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:42.464 [2024-12-13 18:24:16.803335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.057 ms 00:29:42.464 [2024-12-13 18:24:16.803341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.804505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.804528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:42.464 [2024-12-13 18:24:16.804535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.142 ms 00:29:42.464 [2024-12-13 18:24:16.804540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.805390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.805479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:42.464 [2024-12-13 18:24:16.805490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.826 ms 00:29:42.464 [2024-12-13 18:24:16.805496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.806689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.464 [2024-12-13 18:24:16.806712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:42.464 [2024-12-13 18:24:16.806719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.138 ms 00:29:42.464 [2024-12-13 18:24:16.806724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.464 [2024-12-13 18:24:16.806748] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:42.464 [2024-12-13 18:24:16.806759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:42.464 [2024-12-13 18:24:16.806766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:42.464 [2024-12-13 18:24:16.806772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:42.464 [2024-12-13 18:24:16.806779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806797] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:42.464 [2024-12-13 18:24:16.806866] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:42.464 [2024-12-13 18:24:16.806872] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d3a22197-3b2b-4bc7-ae5a-99d140cd7a75 00:29:42.464 [2024-12-13 18:24:16.806883] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:42.464 [2024-12-13 18:24:16.806889] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:42.464 [2024-12-13 18:24:16.806894] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:42.464 [2024-12-13 18:24:16.806900] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:42.464 [2024-12-13 18:24:16.806905] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:42.465 [2024-12-13 18:24:16.806912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:42.465 [2024-12-13 18:24:16.806920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:42.465 [2024-12-13 18:24:16.806925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:42.465 [2024-12-13 18:24:16.806929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:42.465 [2024-12-13 18:24:16.806935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.465 [2024-12-13 18:24:16.806941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:42.465 [2024-12-13 18:24:16.806947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:29:42.465 [2024-12-13 18:24:16.806953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.808251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.465 [2024-12-13 18:24:16.808265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Deinitialize L2P 00:29:42.465 [2024-12-13 18:24:16.808272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.276 ms 00:29:42.465 [2024-12-13 18:24:16.808278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.808350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.465 [2024-12-13 18:24:16.808356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:42.465 [2024-12-13 18:24:16.808362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:42.465 [2024-12-13 18:24:16.808368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.812940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.812967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:42.465 [2024-12-13 18:24:16.812974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.812983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.813004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.813011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:42.465 [2024-12-13 18:24:16.813017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.813022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.813074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.813081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:42.465 [2024-12-13 18:24:16.813087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.813093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.813109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.813116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:42.465 [2024-12-13 18:24:16.813122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.813127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.821354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.821388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:42.465 [2024-12-13 18:24:16.821396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.821401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.827624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.827775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:42.465 [2024-12-13 18:24:16.827787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.827793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.827833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.827841] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:42.465 [2024-12-13 18:24:16.827847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.827853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.827895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.827905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:42.465 [2024-12-13 18:24:16.827910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.827915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.827969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.827976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:42.465 [2024-12-13 18:24:16.827982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.827987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.828010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.828017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:42.465 [2024-12-13 18:24:16.828024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.828030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.828063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.828070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:42.465 [2024-12-13 18:24:16.828075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.828081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.828113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:42.465 [2024-12-13 18:24:16.828122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:42.465 [2024-12-13 18:24:16.828128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:42.465 [2024-12-13 18:24:16.828133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.465 [2024-12-13 18:24:16.828226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 32.670 ms, result 0 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:42.726 Remove shared memory files 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 
-- # echo Remove shared memory files 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid95790 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:42.726 ************************************ 00:29:42.726 END TEST ftl_upgrade_shutdown 00:29:42.726 ************************************ 00:29:42.726 00:29:42.726 real 1m17.788s 00:29:42.726 user 1m41.594s 00:29:42.726 sys 0m21.092s 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.726 18:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.726 18:24:17 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:42.726 18:24:17 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:42.726 18:24:17 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:42.726 18:24:17 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.726 18:24:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:42.726 ************************************ 00:29:42.726 START TEST ftl_restore_fast 00:29:42.726 ************************************ 00:29:42.726 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:42.988 * Looking for test storage... 00:29:42.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:42.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.988 --rc genhtml_branch_coverage=1 00:29:42.988 --rc genhtml_function_coverage=1 00:29:42.988 --rc genhtml_legend=1 00:29:42.988 --rc geninfo_all_blocks=1 00:29:42.988 --rc geninfo_unexecuted_blocks=1 00:29:42.988 00:29:42.988 ' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:42.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.988 --rc genhtml_branch_coverage=1 00:29:42.988 --rc genhtml_function_coverage=1 00:29:42.988 --rc genhtml_legend=1 00:29:42.988 --rc geninfo_all_blocks=1 00:29:42.988 --rc geninfo_unexecuted_blocks=1 00:29:42.988 00:29:42.988 ' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:42.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.988 --rc genhtml_branch_coverage=1 00:29:42.988 --rc genhtml_function_coverage=1 00:29:42.988 --rc genhtml_legend=1 00:29:42.988 --rc geninfo_all_blocks=1 00:29:42.988 --rc geninfo_unexecuted_blocks=1 00:29:42.988 00:29:42.988 ' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:42.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.988 --rc genhtml_branch_coverage=1 00:29:42.988 --rc genhtml_function_coverage=1 00:29:42.988 --rc genhtml_legend=1 00:29:42.988 --rc geninfo_all_blocks=1 00:29:42.988 --rc geninfo_unexecuted_blocks=1 00:29:42.988 00:29:42.988 ' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
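The two "Validate MD5 checksum" iterations traced in the upgrade_shutdown run above follow the loop visible at upgrade_shutdown.sh @97-@105: each pass reads 1024 one-MiB blocks from the ftln1 bdev with spdk_dd over the target RPC socket, hashes the output file, and compares the digest against the one recorded before the shutdown, advancing --skip by 1024 each time. A minimal sketch of that loop, reconstructed from the xtrace; the sums array holding the pre-shutdown digests is an assumption, not the test's actual bookkeeping:

    # Sketch only -- reconstructed from the xtrace above; "sums" is hypothetical.
    iterations=2
    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        # tcp_dd: pull 1024 x 1 MiB blocks out of ftln1 via the tgt RPC socket
        "$spdk_dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$spdk_ini_cnfg" --ib=ftln1 --of="$testdir/file" \
            --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        sum=$(md5sum "$testdir/file" | cut -f1 '-d ')
        [[ $sum == "${sums[i]}" ]] || exit 1  # mismatch => data lost across shutdown
        skip=$(( skip + 1024 ))
    done

In this run both passes matched (af1ef32d... on iteration 1, 26729d88... on iteration 2), with --skip stepping 0 -> 1024 -> 2048 and spdk_dd averaging roughly 508-518 MBps per 1 GiB copy.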
00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:42.988 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.8vfzDNPmDf 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:42.989 18:24:17 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=96241 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 96241 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 96241 ']' 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.989 18:24:17 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:42.989 [2024-12-13 18:24:17.298258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:42.989 [2024-12-13 18:24:17.298535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96241 ] 00:29:43.250 [2024-12-13 18:24:17.442313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.250 [2024-12-13 18:24:17.461997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.821 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.821 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:29:43.821 18:24:18 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:43.821 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:43.821 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:43.821 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:43.821 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:43.821 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:44.082 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:44.082 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:44.082 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:44.082 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:44.082 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:44.082 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:44.082 18:24:18 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:29:44.082 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:44.343 { 00:29:44.343 "name": "nvme0n1", 00:29:44.343 "aliases": [ 00:29:44.343 "9d0f960d-7bb1-4e6f-8ac3-fa1c9f07e3c9" 00:29:44.343 ], 00:29:44.343 "product_name": "NVMe disk", 00:29:44.343 "block_size": 4096, 00:29:44.343 "num_blocks": 1310720, 00:29:44.343 "uuid": "9d0f960d-7bb1-4e6f-8ac3-fa1c9f07e3c9", 00:29:44.343 "numa_id": -1, 00:29:44.343 "assigned_rate_limits": { 00:29:44.343 "rw_ios_per_sec": 0, 00:29:44.343 "rw_mbytes_per_sec": 0, 00:29:44.343 "r_mbytes_per_sec": 0, 00:29:44.343 "w_mbytes_per_sec": 0 00:29:44.343 }, 00:29:44.343 "claimed": true, 00:29:44.343 "claim_type": "read_many_write_one", 00:29:44.343 "zoned": false, 00:29:44.343 "supported_io_types": { 00:29:44.343 "read": true, 00:29:44.343 "write": true, 00:29:44.343 "unmap": true, 00:29:44.343 "flush": true, 00:29:44.343 "reset": true, 00:29:44.343 "nvme_admin": true, 00:29:44.343 "nvme_io": true, 00:29:44.343 "nvme_io_md": false, 00:29:44.343 "write_zeroes": true, 00:29:44.343 "zcopy": false, 00:29:44.343 "get_zone_info": false, 00:29:44.343 "zone_management": false, 00:29:44.343 "zone_append": false, 00:29:44.343 "compare": true, 00:29:44.343 "compare_and_write": false, 00:29:44.343 "abort": true, 00:29:44.343 "seek_hole": false, 00:29:44.343 "seek_data": false, 00:29:44.343 "copy": true, 00:29:44.343 "nvme_iov_md": false 00:29:44.343 }, 00:29:44.343 "driver_specific": { 00:29:44.343 "nvme": [ 00:29:44.343 { 00:29:44.343 "pci_address": "0000:00:11.0", 00:29:44.343 "trid": { 00:29:44.343 "trtype": "PCIe", 00:29:44.343 "traddr": "0000:00:11.0" 00:29:44.343 }, 00:29:44.343 "ctrlr_data": { 00:29:44.343 "cntlid": 0, 00:29:44.343 "vendor_id": "0x1b36", 00:29:44.343 "model_number": "QEMU NVMe Ctrl", 00:29:44.343 "serial_number": "12341", 00:29:44.343 "firmware_revision": "8.0.0", 00:29:44.343 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:44.343 "oacs": { 00:29:44.343 "security": 0, 00:29:44.343 "format": 1, 00:29:44.343 "firmware": 0, 00:29:44.343 "ns_manage": 1 00:29:44.343 }, 00:29:44.343 "multi_ctrlr": false, 00:29:44.343 "ana_reporting": false 00:29:44.343 }, 00:29:44.343 "vs": { 00:29:44.343 "nvme_version": "1.4" 00:29:44.343 }, 00:29:44.343 "ns_data": { 00:29:44.343 "id": 1, 00:29:44.343 "can_share": false 00:29:44.343 } 00:29:44.343 } 00:29:44.343 ], 00:29:44.343 "mp_policy": "active_passive" 00:29:44.343 } 00:29:44.343 } 00:29:44.343 ]' 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:44.343 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:44.604 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=4568ae64-c0da-47a5-81d0-f38616632a8c 00:29:44.604 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:44.604 18:24:18 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4568ae64-c0da-47a5-81d0-f38616632a8c 00:29:44.865 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:45.126 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=d1e1815b-8553-4ca3-aed5-bee4aea02d76 00:29:45.126 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d1e1815b-8553-4ca3-aed5-bee4aea02d76 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:45.387 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:45.388 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:45.388 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:45.388 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:45.388 { 00:29:45.388 "name": "1eca1cf4-0b99-42cf-abc2-81d94c753491", 00:29:45.388 "aliases": [ 00:29:45.388 "lvs/nvme0n1p0" 00:29:45.388 ], 00:29:45.388 "product_name": "Logical Volume", 00:29:45.388 "block_size": 4096, 00:29:45.388 "num_blocks": 26476544, 00:29:45.388 "uuid": "1eca1cf4-0b99-42cf-abc2-81d94c753491", 00:29:45.388 "assigned_rate_limits": { 00:29:45.388 "rw_ios_per_sec": 0, 00:29:45.388 "rw_mbytes_per_sec": 0, 00:29:45.388 "r_mbytes_per_sec": 0, 00:29:45.388 "w_mbytes_per_sec": 0 00:29:45.388 }, 00:29:45.388 "claimed": false, 00:29:45.388 "zoned": false, 00:29:45.388 "supported_io_types": { 00:29:45.388 "read": true, 00:29:45.388 "write": true, 00:29:45.388 "unmap": true, 00:29:45.388 "flush": false, 00:29:45.388 "reset": true, 00:29:45.388 "nvme_admin": false, 00:29:45.388 "nvme_io": false, 00:29:45.388 "nvme_io_md": false, 00:29:45.388 "write_zeroes": true, 00:29:45.388 "zcopy": false, 00:29:45.388 "get_zone_info": false, 00:29:45.388 "zone_management": false, 00:29:45.388 
"zone_append": false, 00:29:45.388 "compare": false, 00:29:45.388 "compare_and_write": false, 00:29:45.388 "abort": false, 00:29:45.388 "seek_hole": true, 00:29:45.388 "seek_data": true, 00:29:45.388 "copy": false, 00:29:45.388 "nvme_iov_md": false 00:29:45.388 }, 00:29:45.388 "driver_specific": { 00:29:45.388 "lvol": { 00:29:45.388 "lvol_store_uuid": "d1e1815b-8553-4ca3-aed5-bee4aea02d76", 00:29:45.388 "base_bdev": "nvme0n1", 00:29:45.388 "thin_provision": true, 00:29:45.388 "num_allocated_clusters": 0, 00:29:45.388 "snapshot": false, 00:29:45.388 "clone": false, 00:29:45.388 "esnap_clone": false 00:29:45.388 } 00:29:45.388 } 00:29:45.388 } 00:29:45.388 ]' 00:29:45.388 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:45.648 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:45.648 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:45.648 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:45.648 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:45.648 18:24:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:45.648 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:45.648 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:45.648 18:24:19 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:45.909 { 00:29:45.909 "name": "1eca1cf4-0b99-42cf-abc2-81d94c753491", 00:29:45.909 "aliases": [ 00:29:45.909 "lvs/nvme0n1p0" 00:29:45.909 ], 00:29:45.909 "product_name": "Logical Volume", 00:29:45.909 "block_size": 4096, 00:29:45.909 "num_blocks": 26476544, 00:29:45.909 "uuid": "1eca1cf4-0b99-42cf-abc2-81d94c753491", 00:29:45.909 "assigned_rate_limits": { 00:29:45.909 "rw_ios_per_sec": 0, 00:29:45.909 "rw_mbytes_per_sec": 0, 00:29:45.909 "r_mbytes_per_sec": 0, 00:29:45.909 "w_mbytes_per_sec": 0 00:29:45.909 }, 00:29:45.909 "claimed": false, 00:29:45.909 "zoned": false, 00:29:45.909 "supported_io_types": { 00:29:45.909 "read": true, 00:29:45.909 "write": true, 00:29:45.909 "unmap": true, 00:29:45.909 "flush": false, 00:29:45.909 "reset": true, 00:29:45.909 "nvme_admin": false, 00:29:45.909 "nvme_io": false, 00:29:45.909 "nvme_io_md": false, 00:29:45.909 "write_zeroes": true, 00:29:45.909 "zcopy": false, 00:29:45.909 "get_zone_info": false, 00:29:45.909 
"zone_management": false, 00:29:45.909 "zone_append": false, 00:29:45.909 "compare": false, 00:29:45.909 "compare_and_write": false, 00:29:45.909 "abort": false, 00:29:45.909 "seek_hole": true, 00:29:45.909 "seek_data": true, 00:29:45.909 "copy": false, 00:29:45.909 "nvme_iov_md": false 00:29:45.909 }, 00:29:45.909 "driver_specific": { 00:29:45.909 "lvol": { 00:29:45.909 "lvol_store_uuid": "d1e1815b-8553-4ca3-aed5-bee4aea02d76", 00:29:45.909 "base_bdev": "nvme0n1", 00:29:45.909 "thin_provision": true, 00:29:45.909 "num_allocated_clusters": 0, 00:29:45.909 "snapshot": false, 00:29:45.909 "clone": false, 00:29:45.909 "esnap_clone": false 00:29:45.909 } 00:29:45.909 } 00:29:45.909 } 00:29:45.909 ]' 00:29:45.909 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:46.170 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1eca1cf4-0b99-42cf-abc2-81d94c753491 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:46.432 { 00:29:46.432 "name": "1eca1cf4-0b99-42cf-abc2-81d94c753491", 00:29:46.432 "aliases": [ 00:29:46.432 "lvs/nvme0n1p0" 00:29:46.432 ], 00:29:46.432 "product_name": "Logical Volume", 00:29:46.432 "block_size": 4096, 00:29:46.432 "num_blocks": 26476544, 00:29:46.432 "uuid": "1eca1cf4-0b99-42cf-abc2-81d94c753491", 00:29:46.432 "assigned_rate_limits": { 00:29:46.432 "rw_ios_per_sec": 0, 00:29:46.432 "rw_mbytes_per_sec": 0, 00:29:46.432 "r_mbytes_per_sec": 0, 00:29:46.432 "w_mbytes_per_sec": 0 00:29:46.432 }, 00:29:46.432 "claimed": false, 00:29:46.432 "zoned": false, 00:29:46.432 "supported_io_types": { 00:29:46.432 "read": true, 00:29:46.432 "write": true, 00:29:46.432 "unmap": true, 00:29:46.432 "flush": false, 00:29:46.432 "reset": true, 00:29:46.432 "nvme_admin": false, 00:29:46.432 "nvme_io": false, 00:29:46.432 "nvme_io_md": false, 00:29:46.432 "write_zeroes": true, 00:29:46.432 "zcopy": false, 00:29:46.432 "get_zone_info": false, 00:29:46.432 "zone_management": false, 00:29:46.432 "zone_append": false, 00:29:46.432 "compare": false, 00:29:46.432 "compare_and_write": false, 00:29:46.432 "abort": false, 
00:29:46.432 "seek_hole": true, 00:29:46.432 "seek_data": true, 00:29:46.432 "copy": false, 00:29:46.432 "nvme_iov_md": false 00:29:46.432 }, 00:29:46.432 "driver_specific": { 00:29:46.432 "lvol": { 00:29:46.432 "lvol_store_uuid": "d1e1815b-8553-4ca3-aed5-bee4aea02d76", 00:29:46.432 "base_bdev": "nvme0n1", 00:29:46.432 "thin_provision": true, 00:29:46.432 "num_allocated_clusters": 0, 00:29:46.432 "snapshot": false, 00:29:46.432 "clone": false, 00:29:46.432 "esnap_clone": false 00:29:46.432 } 00:29:46.432 } 00:29:46.432 } 00:29:46.432 ]' 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1eca1cf4-0b99-42cf-abc2-81d94c753491 --l2p_dram_limit 10' 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:46.432 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:46.433 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:46.433 18:24:20 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1eca1cf4-0b99-42cf-abc2-81d94c753491 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:46.693 [2024-12-13 18:24:20.980923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.693 [2024-12-13 18:24:20.981064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:46.693 [2024-12-13 18:24:20.981081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:46.693 [2024-12-13 18:24:20.981089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.693 [2024-12-13 18:24:20.981144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.693 [2024-12-13 18:24:20.981157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:46.693 [2024-12-13 18:24:20.981165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:46.693 [2024-12-13 18:24:20.981174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.693 [2024-12-13 18:24:20.981195] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:46.693 [2024-12-13 18:24:20.981438] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:46.693 [2024-12-13 18:24:20.981451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.693 [2024-12-13 18:24:20.981461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:46.693 [2024-12-13 18:24:20.981468] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:29:46.693 [2024-12-13 18:24:20.981477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.693 [2024-12-13 18:24:20.981558] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b1f80ed3-7057-4d48-a05a-a50f61545960 00:29:46.693 [2024-12-13 18:24:20.982522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.693 [2024-12-13 18:24:20.982549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:46.693 [2024-12-13 18:24:20.982560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:46.693 [2024-12-13 18:24:20.982569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.693 [2024-12-13 18:24:20.987502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.693 [2024-12-13 18:24:20.987607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:46.693 [2024-12-13 18:24:20.987624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.898 ms 00:29:46.693 [2024-12-13 18:24:20.987631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.693 [2024-12-13 18:24:20.987692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.693 [2024-12-13 18:24:20.987699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:46.693 [2024-12-13 18:24:20.987707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:46.693 [2024-12-13 18:24:20.987712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.693 [2024-12-13 18:24:20.987759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.693 [2024-12-13 18:24:20.987770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:46.693 [2024-12-13 18:24:20.987778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:46.693 [2024-12-13 18:24:20.987783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.693 [2024-12-13 18:24:20.987800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:46.693 [2024-12-13 18:24:20.989069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.693 [2024-12-13 18:24:20.989097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:46.693 [2024-12-13 18:24:20.989104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:29:46.693 [2024-12-13 18:24:20.989112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.693 [2024-12-13 18:24:20.989138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.693 [2024-12-13 18:24:20.989146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:46.693 [2024-12-13 18:24:20.989152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:46.693 [2024-12-13 18:24:20.989161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.693 [2024-12-13 18:24:20.989174] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:46.693 [2024-12-13 18:24:20.989305] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:46.693 [2024-12-13 18:24:20.989315] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:46.693 [2024-12-13 18:24:20.989325] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:46.693 [2024-12-13 18:24:20.989333] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:46.693 [2024-12-13 18:24:20.989345] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:46.693 [2024-12-13 18:24:20.989351] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:46.693 [2024-12-13 18:24:20.989360] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:46.694 [2024-12-13 18:24:20.989369] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:46.694 [2024-12-13 18:24:20.989376] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:46.694 [2024-12-13 18:24:20.989381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.694 [2024-12-13 18:24:20.989391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:46.694 [2024-12-13 18:24:20.989396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:29:46.694 [2024-12-13 18:24:20.989403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.694 [2024-12-13 18:24:20.989468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.694 [2024-12-13 18:24:20.989477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:46.694 [2024-12-13 18:24:20.989486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:46.694 [2024-12-13 18:24:20.989493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.694 [2024-12-13 18:24:20.989570] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:46.694 [2024-12-13 18:24:20.989579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:46.694 [2024-12-13 18:24:20.989585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:46.694 [2024-12-13 18:24:20.989593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:46.694 [2024-12-13 18:24:20.989605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:46.694 [2024-12-13 18:24:20.989616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:46.694 [2024-12-13 18:24:20.989621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:46.694 [2024-12-13 18:24:20.989633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:46.694 [2024-12-13 18:24:20.989640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:46.694 [2024-12-13 18:24:20.989645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:46.694 [2024-12-13 18:24:20.989653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:46.694 [2024-12-13 18:24:20.989658] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:46.694 [2024-12-13 18:24:20.989665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:46.694 [2024-12-13 18:24:20.989676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:46.694 [2024-12-13 18:24:20.989681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:46.694 [2024-12-13 18:24:20.989693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:46.694 [2024-12-13 18:24:20.989705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:46.694 [2024-12-13 18:24:20.989712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:46.694 [2024-12-13 18:24:20.989725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:46.694 [2024-12-13 18:24:20.989731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:46.694 [2024-12-13 18:24:20.989745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:46.694 [2024-12-13 18:24:20.989753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:46.694 [2024-12-13 18:24:20.989766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:46.694 [2024-12-13 18:24:20.989772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:46.694 [2024-12-13 18:24:20.989785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:46.694 [2024-12-13 18:24:20.989792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:46.694 [2024-12-13 18:24:20.989797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:46.694 [2024-12-13 18:24:20.989804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:46.694 [2024-12-13 18:24:20.989810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:46.694 [2024-12-13 18:24:20.989817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:46.694 [2024-12-13 18:24:20.989830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:46.694 [2024-12-13 18:24:20.989836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989843] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:46.694 [2024-12-13 18:24:20.989853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:46.694 [2024-12-13 18:24:20.989862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:46.694 [2024-12-13 18:24:20.989868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.694 [2024-12-13 18:24:20.989876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:46.694 [2024-12-13 18:24:20.989882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:46.694 [2024-12-13 18:24:20.989889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:46.694 [2024-12-13 18:24:20.989894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:46.694 [2024-12-13 18:24:20.989903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:46.694 [2024-12-13 18:24:20.989910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:46.694 [2024-12-13 18:24:20.989918] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:46.694 [2024-12-13 18:24:20.989928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:46.694 [2024-12-13 18:24:20.989939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:46.694 [2024-12-13 18:24:20.989946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:46.694 [2024-12-13 18:24:20.989953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:46.694 [2024-12-13 18:24:20.989959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:46.694 [2024-12-13 18:24:20.989968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:46.694 [2024-12-13 18:24:20.989974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:46.694 [2024-12-13 18:24:20.989983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:46.694 [2024-12-13 18:24:20.989989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:46.694 [2024-12-13 18:24:20.989997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:46.694 [2024-12-13 18:24:20.990003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:46.694 [2024-12-13 18:24:20.990011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:46.694 [2024-12-13 18:24:20.990018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:46.694 [2024-12-13 18:24:20.990025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:46.694 [2024-12-13 18:24:20.990031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
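
Editor's note: the superblock dump above prints every NV-cache metadata region as "Region type:<id> ver:<n> blk_offs:<hex> blk_sz:<hex>" (the base-device half of the same dump follows below). A quick sanity check on such a dump is that the regions tile the address space with no gaps or overlaps: in this run 0x0 + 0x20 = 0x20, 0x20 + 0x5000 = 0x5020, and so on through the trailing 0xfffffffe free region. Below is a minimal, hypothetical Python sketch of that check; REGION_RE and check_contiguous are illustrative names, not SPDK APIs, and it assumes one section of the dump (nvc or base dev) is fed at a time.

    import re

    # Matches notices like:
    #   Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    REGION_RE = re.compile(
        r"Region type:(0x[0-9a-fA-F]+) ver:(\d+)"
        r" blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)"
    )

    def check_contiguous(dump_section: str) -> int:
        """Assert the regions in one dump section tile the address space."""
        regions = sorted(
            (int(offs, 16), int(size, 16), rtype)
            for rtype, _ver, offs, size in REGION_RE.findall(dump_section)
        )
        expected = 0
        for offs, size, rtype in regions:
            assert offs == expected, f"gap/overlap before region {rtype} at {offs:#x}"
            expected = offs + size
        return expected  # total blocks covered by the section

Fed the nvc section above, this walks from 0x0 through 0x7220 + 0x13c0e0 without tripping the assertion.
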
00:29:46.694 [2024-12-13 18:24:20.990038] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:46.694 [2024-12-13 18:24:20.990046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:46.694 [2024-12-13 18:24:20.990054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:46.694 [2024-12-13 18:24:20.990061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:46.694 [2024-12-13 18:24:20.990069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:46.694 [2024-12-13 18:24:20.990075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:46.694 [2024-12-13 18:24:20.990085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.694 [2024-12-13 18:24:20.990091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:46.694 [2024-12-13 18:24:20.990103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:29:46.694 [2024-12-13 18:24:20.990109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.694 [2024-12-13 18:24:20.990139] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:46.694 [2024-12-13 18:24:20.990146] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:50.907 [2024-12-13 18:24:24.662376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.662709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:50.907 [2024-12-13 18:24:24.662743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3672.213 ms 00:29:50.907 [2024-12-13 18:24:24.662754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.676666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.676717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:50.907 [2024-12-13 18:24:24.676740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.786 ms 00:29:50.907 [2024-12-13 18:24:24.676753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.676875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.676889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:50.907 [2024-12-13 18:24:24.676901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:29:50.907 [2024-12-13 18:24:24.676912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.689064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.689280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:50.907 [2024-12-13 18:24:24.689306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.080 ms 00:29:50.907 [2024-12-13 18:24:24.689324] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.689363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.689373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:50.907 [2024-12-13 18:24:24.689383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:50.907 [2024-12-13 18:24:24.689391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.689935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.689957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:50.907 [2024-12-13 18:24:24.689970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:29:50.907 [2024-12-13 18:24:24.689979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.690111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.690122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:50.907 [2024-12-13 18:24:24.690134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:50.907 [2024-12-13 18:24:24.690143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.698215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.698298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:50.907 [2024-12-13 18:24:24.698313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.046 ms 00:29:50.907 [2024-12-13 18:24:24.698321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.718113] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:50.907 [2024-12-13 18:24:24.722263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.722318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:50.907 [2024-12-13 18:24:24.722333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.872 ms 00:29:50.907 [2024-12-13 18:24:24.722345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.806634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.806706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:50.907 [2024-12-13 18:24:24.806724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.235 ms 00:29:50.907 [2024-12-13 18:24:24.806738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.806953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.806967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:50.907 [2024-12-13 18:24:24.806982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:29:50.907 [2024-12-13 18:24:24.806992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.812919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.812979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:29:50.907 [2024-12-13 18:24:24.812994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.888 ms 00:29:50.907 [2024-12-13 18:24:24.813005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.818148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.818203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:50.907 [2024-12-13 18:24:24.818215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.093 ms 00:29:50.907 [2024-12-13 18:24:24.818225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.907 [2024-12-13 18:24:24.818600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.907 [2024-12-13 18:24:24.818616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:50.907 [2024-12-13 18:24:24.818626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:29:50.907 [2024-12-13 18:24:24.818638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.908 [2024-12-13 18:24:24.865862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.908 [2024-12-13 18:24:24.865923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:50.908 [2024-12-13 18:24:24.865940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.201 ms 00:29:50.908 [2024-12-13 18:24:24.865956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.908 [2024-12-13 18:24:24.873083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.908 [2024-12-13 18:24:24.873407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:50.908 [2024-12-13 18:24:24.873435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.048 ms 00:29:50.908 [2024-12-13 18:24:24.873447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.908 [2024-12-13 18:24:24.879404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.908 [2024-12-13 18:24:24.879462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:50.908 [2024-12-13 18:24:24.879472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.909 ms 00:29:50.908 [2024-12-13 18:24:24.879482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.908 [2024-12-13 18:24:24.885807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.908 [2024-12-13 18:24:24.885864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:50.908 [2024-12-13 18:24:24.885875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.276 ms 00:29:50.908 [2024-12-13 18:24:24.885888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.908 [2024-12-13 18:24:24.885941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.908 [2024-12-13 18:24:24.885954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:50.908 [2024-12-13 18:24:24.885967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:50.908 [2024-12-13 18:24:24.885977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.908 [2024-12-13 18:24:24.886068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.908 [2024-12-13 18:24:24.886081] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:50.908 [2024-12-13 18:24:24.886090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:50.908 [2024-12-13 18:24:24.886103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.908 [2024-12-13 18:24:24.887491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3906.051 ms, result 0 00:29:50.908 { 00:29:50.908 "name": "ftl0", 00:29:50.908 "uuid": "b1f80ed3-7057-4d48-a05a-a50f61545960" 00:29:50.908 } 00:29:50.908 18:24:24 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:50.908 18:24:24 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:50.908 18:24:25 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:50.908 18:24:25 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:51.170 [2024-12-13 18:24:25.306708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.306769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:51.170 [2024-12-13 18:24:25.306789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:51.170 [2024-12-13 18:24:25.306797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.306825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:51.170 [2024-12-13 18:24:25.307599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.307644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:51.170 [2024-12-13 18:24:25.307662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:29:51.170 [2024-12-13 18:24:25.307673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.307947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.307968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:51.170 [2024-12-13 18:24:25.307978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:29:51.170 [2024-12-13 18:24:25.307992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.311267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.311294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:51.170 [2024-12-13 18:24:25.311304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:29:51.170 [2024-12-13 18:24:25.311315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.317592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.317639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:51.170 [2024-12-13 18:24:25.317651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.259 ms 00:29:51.170 [2024-12-13 18:24:25.317665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.320691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
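
Editor's note: the restore.sh@61-63 lines above build the JSON that spdk_dd consumes later in this log: they wrap the bdev subsystem dump emitted by scripts/rpc.py save_subsystem_config in a {"subsystems": [...]} envelope. A hedged Python sketch of the same assembly, with paths assumed relative to an SPDK checkout (the file and script locations mirror the log; nothing here is an SPDK library API):

    import subprocess

    # Equivalent of restore.sh@61-63 above:
    #   echo '{"subsystems": [' ; rpc.py save_subsystem_config -n bdev ; echo ']}'
    # The resulting file is what spdk_dd is later pointed at via --json=.../ftl.json.
    bdev_cfg = subprocess.run(
        ["scripts/rpc.py", "save_subsystem_config", "-n", "bdev"],
        capture_output=True, text=True, check=True,
    ).stdout
    with open("test/ftl/config/ftl.json", "w") as fp:
        fp.write('{"subsystems": [' + bdev_cfg + ']}')
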
00:29:51.170 [2024-12-13 18:24:25.320754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:51.170 [2024-12-13 18:24:25.320765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms 00:29:51.170 [2024-12-13 18:24:25.320775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.327098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.327323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:51.170 [2024-12-13 18:24:25.327344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.275 ms 00:29:51.170 [2024-12-13 18:24:25.327356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.327493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.327507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:51.170 [2024-12-13 18:24:25.327523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:29:51.170 [2024-12-13 18:24:25.327533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.330867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.331039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:51.170 [2024-12-13 18:24:25.331056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.309 ms 00:29:51.170 [2024-12-13 18:24:25.331066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.333945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.334004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:51.170 [2024-12-13 18:24:25.334014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.836 ms 00:29:51.170 [2024-12-13 18:24:25.334023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.336326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.336482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:51.170 [2024-12-13 18:24:25.336541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:29:51.170 [2024-12-13 18:24:25.336555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.338963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.170 [2024-12-13 18:24:25.339024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:51.170 [2024-12-13 18:24:25.339034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.273 ms 00:29:51.170 [2024-12-13 18:24:25.339044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.170 [2024-12-13 18:24:25.339089] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:51.170 [2024-12-13 18:24:25.339107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339129] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:51.170 [2024-12-13 18:24:25.339346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339364] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 
18:24:25.339585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:29:51.171 [2024-12-13 18:24:25.339800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.339994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:51.171 [2024-12-13 18:24:25.340013] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:51.171 [2024-12-13 18:24:25.340021] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1f80ed3-7057-4d48-a05a-a50f61545960 00:29:51.171 
[2024-12-13 18:24:25.340032] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:51.171 [2024-12-13 18:24:25.340039] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:51.171 [2024-12-13 18:24:25.340049] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:51.171 [2024-12-13 18:24:25.340056] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:51.171 [2024-12-13 18:24:25.340066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:51.171 [2024-12-13 18:24:25.340084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:51.171 [2024-12-13 18:24:25.340093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:51.171 [2024-12-13 18:24:25.340100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:51.171 [2024-12-13 18:24:25.340108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:51.171 [2024-12-13 18:24:25.340115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.171 [2024-12-13 18:24:25.340125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:51.171 [2024-12-13 18:24:25.340133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:29:51.171 [2024-12-13 18:24:25.340146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.171 [2024-12-13 18:24:25.342593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.171 [2024-12-13 18:24:25.342632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:51.171 [2024-12-13 18:24:25.342642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.425 ms 00:29:51.171 [2024-12-13 18:24:25.342655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.171 [2024-12-13 18:24:25.342783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.171 [2024-12-13 18:24:25.342795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:51.171 [2024-12-13 18:24:25.342804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:29:51.172 [2024-12-13 18:24:25.342813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.350845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.350904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.172 [2024-12-13 18:24:25.350923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.350934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.350998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.351010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.172 [2024-12-13 18:24:25.351018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.351029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.351114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.351131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.172 [2024-12-13 18:24:25.351139] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.351151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.351170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.351181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.172 [2024-12-13 18:24:25.351189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.351199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.365023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.365080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.172 [2024-12-13 18:24:25.365091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.365105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.375513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.375567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:51.172 [2024-12-13 18:24:25.375578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.375588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.375662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.375678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:51.172 [2024-12-13 18:24:25.375686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.375696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.375746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.375758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:51.172 [2024-12-13 18:24:25.375766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.375775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.375847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.375859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:51.172 [2024-12-13 18:24:25.375867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.375877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.375909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.375924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:51.172 [2024-12-13 18:24:25.375931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.375942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.375981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.375995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:29:51.172 [2024-12-13 18:24:25.376007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.376018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.376067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.172 [2024-12-13 18:24:25.376078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:51.172 [2024-12-13 18:24:25.376087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.172 [2024-12-13 18:24:25.376096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.172 [2024-12-13 18:24:25.376237] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.499 ms, result 0 00:29:51.172 true 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 96241 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96241 ']' 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96241 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96241 00:29:51.172 killing process with pid 96241 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96241' 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 96241 00:29:51.172 18:24:25 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 96241 00:29:56.463 18:24:30 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:59.764 262144+0 records in 00:29:59.764 262144+0 records out 00:29:59.764 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.533 s, 304 MB/s 00:29:59.764 18:24:34 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:01.678 18:24:35 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:01.678 [2024-12-13 18:24:35.704980] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
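
Editor's note: the restore test above stages its data in three steps: dd fills a 1 GiB testfile from /dev/urandom (262144 blocks of 4 KiB, copied here at about 304 MB/s), md5sum records its checksum, and spdk_dd writes the file through ftl0 using the saved ftl.json. The checksum exists so the data can be compared after the FTL device is torn down and brought back. A minimal sketch of that streaming-checksum step; the readback file name is hypothetical (only testfile appears in this log):

    import hashlib

    def md5_of(path: str, chunk: int = 1 << 20) -> str:
        """Stream a large file through MD5 so 1 GiB never sits in memory."""
        digest = hashlib.md5()
        with open(path, "rb") as fp:
            while block := fp.read(chunk):
                digest.update(block)
        return digest.hexdigest()

    # The pass/fail criterion after restore would be a digest match, e.g.:
    #   assert md5_of("testfile") == md5_of("testfile.readback")
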
00:30:01.678 [2024-12-13 18:24:35.705271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96444 ] 00:30:01.678 [2024-12-13 18:24:35.851598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.678 [2024-12-13 18:24:35.879596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.678 [2024-12-13 18:24:35.994834] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:01.678 [2024-12-13 18:24:35.995198] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:01.941 [2024-12-13 18:24:36.157212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.157311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:01.941 [2024-12-13 18:24:36.157326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:01.941 [2024-12-13 18:24:36.157335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.157394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.157405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:01.941 [2024-12-13 18:24:36.157414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:01.941 [2024-12-13 18:24:36.157432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.157460] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:01.941 [2024-12-13 18:24:36.157826] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:01.941 [2024-12-13 18:24:36.157863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.157875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:01.941 [2024-12-13 18:24:36.157887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:30:01.941 [2024-12-13 18:24:36.157895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.159581] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:01.941 [2024-12-13 18:24:36.163258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.163305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:01.941 [2024-12-13 18:24:36.163316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.663 ms 00:30:01.941 [2024-12-13 18:24:36.163335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.163411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.163424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:01.941 [2024-12-13 18:24:36.163437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:01.941 [2024-12-13 18:24:36.163448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.171277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
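
Editor's note: every management step in these logs is a four-notice quad (Action, name, duration, status) from trace_step in mngt/ftl_mngt.c, and the second startup below emits the same sequence as the first. Aggregating the quads shows where the time goes: in the first startup above, "Scrub NV cache" alone accounts for 3672.213 ms of the 3906.051 ms total. A hypothetical post-processing sketch; STEP_RE and step_durations are illustrative names, and the regex assumes the inline "00:29:46.694"-style run-time prefixes shown in this log:

    import re
    from collections import defaultdict

    # Pair each "name: <step>" notice with the "duration: <ms> ms" notice that
    # immediately follows it; the non-greedy gap skips the interleaved
    # timestamps and file:line prefixes seen above.
    STEP_RE = re.compile(
        r"name: (.*?) \d{2}:\d{2}:\d{2}\.\d{3}.*?duration: ([\d.]+) ms",
        re.S,
    )

    def step_durations(log_text: str) -> list[tuple[str, float]]:
        """Sum per-step durations and rank the steps, slowest first."""
        totals = defaultdict(float)
        for name, ms in STEP_RE.findall(log_text):
            totals[name] += float(ms)
        return sorted(totals.items(), key=lambda kv: -kv[1])
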
00:30:01.941 [2024-12-13 18:24:36.171315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:01.941 [2024-12-13 18:24:36.171336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.784 ms 00:30:01.941 [2024-12-13 18:24:36.171344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.171444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.171454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:01.941 [2024-12-13 18:24:36.171463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:30:01.941 [2024-12-13 18:24:36.171471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.171524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.171535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:01.941 [2024-12-13 18:24:36.171543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:01.941 [2024-12-13 18:24:36.171553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.171580] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:01.941 [2024-12-13 18:24:36.173572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.173605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:01.941 [2024-12-13 18:24:36.173615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:30:01.941 [2024-12-13 18:24:36.173623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.173660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.173669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:01.941 [2024-12-13 18:24:36.173677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:01.941 [2024-12-13 18:24:36.173688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.173712] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:01.941 [2024-12-13 18:24:36.173736] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:01.941 [2024-12-13 18:24:36.173778] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:01.941 [2024-12-13 18:24:36.173793] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:01.941 [2024-12-13 18:24:36.173899] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:01.941 [2024-12-13 18:24:36.173910] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:01.941 [2024-12-13 18:24:36.173923] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:01.941 [2024-12-13 18:24:36.173933] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:01.941 [2024-12-13 18:24:36.173943] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:01.941 [2024-12-13 18:24:36.173952] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:01.941 [2024-12-13 18:24:36.173960] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:01.941 [2024-12-13 18:24:36.173968] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:01.941 [2024-12-13 18:24:36.173976] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:01.941 [2024-12-13 18:24:36.173984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.173992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:01.941 [2024-12-13 18:24:36.174000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:30:01.941 [2024-12-13 18:24:36.174009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.174094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.941 [2024-12-13 18:24:36.174103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:01.941 [2024-12-13 18:24:36.174111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:01.941 [2024-12-13 18:24:36.174118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.941 [2024-12-13 18:24:36.174218] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:01.941 [2024-12-13 18:24:36.174230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:01.941 [2024-12-13 18:24:36.174239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:01.941 [2024-12-13 18:24:36.174269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.941 [2024-12-13 18:24:36.174279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:01.941 [2024-12-13 18:24:36.174286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:01.941 [2024-12-13 18:24:36.174294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:01.941 [2024-12-13 18:24:36.174303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:01.941 [2024-12-13 18:24:36.174311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:01.941 [2024-12-13 18:24:36.174319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:01.941 [2024-12-13 18:24:36.174327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:01.941 [2024-12-13 18:24:36.174338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:01.941 [2024-12-13 18:24:36.174346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:01.941 [2024-12-13 18:24:36.174354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:01.942 [2024-12-13 18:24:36.174363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:01.942 [2024-12-13 18:24:36.174372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:01.942 [2024-12-13 18:24:36.174388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:01.942 [2024-12-13 18:24:36.174396] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:01.942 [2024-12-13 18:24:36.174413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:01.942 [2024-12-13 18:24:36.174429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:01.942 [2024-12-13 18:24:36.174436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:01.942 [2024-12-13 18:24:36.174452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:01.942 [2024-12-13 18:24:36.174460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:01.942 [2024-12-13 18:24:36.174480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:01.942 [2024-12-13 18:24:36.174488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:01.942 [2024-12-13 18:24:36.174503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:01.942 [2024-12-13 18:24:36.174511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:01.942 [2024-12-13 18:24:36.174526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:01.942 [2024-12-13 18:24:36.174533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:01.942 [2024-12-13 18:24:36.174541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:01.942 [2024-12-13 18:24:36.174549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:01.942 [2024-12-13 18:24:36.174564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:01.942 [2024-12-13 18:24:36.174571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:01.942 [2024-12-13 18:24:36.174587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:01.942 [2024-12-13 18:24:36.174595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174607] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:01.942 [2024-12-13 18:24:36.174617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:01.942 [2024-12-13 18:24:36.174624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:01.942 [2024-12-13 18:24:36.174635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:01.942 [2024-12-13 18:24:36.174643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:01.942 [2024-12-13 18:24:36.174651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:01.942 [2024-12-13 18:24:36.174658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:01.942 
[2024-12-13 18:24:36.174666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:01.942 [2024-12-13 18:24:36.174673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:01.942 [2024-12-13 18:24:36.174680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:01.942 [2024-12-13 18:24:36.174688] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:01.942 [2024-12-13 18:24:36.174701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:01.942 [2024-12-13 18:24:36.174714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:01.942 [2024-12-13 18:24:36.174721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:01.942 [2024-12-13 18:24:36.174728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:01.942 [2024-12-13 18:24:36.174734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:01.942 [2024-12-13 18:24:36.174744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:01.942 [2024-12-13 18:24:36.174751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:01.942 [2024-12-13 18:24:36.174758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:01.942 [2024-12-13 18:24:36.174765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:01.942 [2024-12-13 18:24:36.174772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:01.942 [2024-12-13 18:24:36.174785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:01.942 [2024-12-13 18:24:36.174792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:01.942 [2024-12-13 18:24:36.174799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:01.942 [2024-12-13 18:24:36.174807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:01.942 [2024-12-13 18:24:36.174814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:01.942 [2024-12-13 18:24:36.174821] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:01.942 [2024-12-13 18:24:36.174829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:01.942 [2024-12-13 18:24:36.174837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:01.942 [2024-12-13 18:24:36.174844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:01.942 [2024-12-13 18:24:36.174851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:01.942 [2024-12-13 18:24:36.174859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:01.942 [2024-12-13 18:24:36.174869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.174877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:01.942 [2024-12-13 18:24:36.174885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:30:01.942 [2024-12-13 18:24:36.174895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-12-13 18:24:36.188513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.188714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:01.942 [2024-12-13 18:24:36.188734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.574 ms 00:30:01.942 [2024-12-13 18:24:36.188749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-12-13 18:24:36.188846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.188857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:01.942 [2024-12-13 18:24:36.188865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:01.942 [2024-12-13 18:24:36.188877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-12-13 18:24:36.214511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.214603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:01.942 [2024-12-13 18:24:36.214628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.573 ms 00:30:01.942 [2024-12-13 18:24:36.214644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-12-13 18:24:36.214734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.214755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:01.942 [2024-12-13 18:24:36.214773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:01.942 [2024-12-13 18:24:36.214788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-12-13 18:24:36.215505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.215561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:01.942 [2024-12-13 18:24:36.215583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:30:01.942 [2024-12-13 18:24:36.215614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-12-13 18:24:36.215873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.215902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:01.942 [2024-12-13 18:24:36.215919] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:30:01.942 [2024-12-13 18:24:36.215934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-12-13 18:24:36.224352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.224534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:01.942 [2024-12-13 18:24:36.224552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.382 ms 00:30:01.942 [2024-12-13 18:24:36.224561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-12-13 18:24:36.228300] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:01.942 [2024-12-13 18:24:36.228347] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:01.942 [2024-12-13 18:24:36.228360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.228369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:01.942 [2024-12-13 18:24:36.228378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.691 ms 00:30:01.942 [2024-12-13 18:24:36.228385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.942 [2024-12-13 18:24:36.244168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.942 [2024-12-13 18:24:36.244359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:01.942 [2024-12-13 18:24:36.244380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.727 ms 00:30:01.943 [2024-12-13 18:24:36.244389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.247199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.247275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:01.943 [2024-12-13 18:24:36.247290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.758 ms 00:30:01.943 [2024-12-13 18:24:36.247298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.249822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.249868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:01.943 [2024-12-13 18:24:36.249878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.472 ms 00:30:01.943 [2024-12-13 18:24:36.249886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.250290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.250307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:01.943 [2024-12-13 18:24:36.250317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:30:01.943 [2024-12-13 18:24:36.250325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.274848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.274914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:01.943 [2024-12-13 18:24:36.274926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.501 ms 00:30:01.943 [2024-12-13 18:24:36.274935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.283102] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:01.943 [2024-12-13 18:24:36.286457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.286504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:01.943 [2024-12-13 18:24:36.286516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.474 ms 00:30:01.943 [2024-12-13 18:24:36.286529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.286609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.286626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:01.943 [2024-12-13 18:24:36.286637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:01.943 [2024-12-13 18:24:36.286651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.286722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.286732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:01.943 [2024-12-13 18:24:36.286742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:01.943 [2024-12-13 18:24:36.286757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.286778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.286791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:01.943 [2024-12-13 18:24:36.286800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:01.943 [2024-12-13 18:24:36.286808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.286848] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:01.943 [2024-12-13 18:24:36.286860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.286868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:01.943 [2024-12-13 18:24:36.286876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:01.943 [2024-12-13 18:24:36.286884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.292780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.292829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:01.943 [2024-12-13 18:24:36.292840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.874 ms 00:30:01.943 [2024-12-13 18:24:36.292848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 [2024-12-13 18:24:36.292950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.943 [2024-12-13 18:24:36.292962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:01.943 [2024-12-13 18:24:36.292972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:01.943 [2024-12-13 18:24:36.292986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.943 
[2024-12-13 18:24:36.294157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.482 ms, result 0 00:30:03.331  [2024-12-13T18:24:38.346Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-13T18:24:39.733Z] Copying: 40/1024 [MB] (28 MBps) [2024-12-13T18:24:40.674Z] Copying: 60/1024 [MB] (20 MBps) [2024-12-13T18:24:41.618Z] Copying: 73/1024 [MB] (12 MBps) [2024-12-13T18:24:42.563Z] Copying: 89/1024 [MB] (16 MBps) [2024-12-13T18:24:43.507Z] Copying: 100/1024 [MB] (10 MBps) [2024-12-13T18:24:44.450Z] Copying: 120/1024 [MB] (19 MBps) [2024-12-13T18:24:45.393Z] Copying: 133/1024 [MB] (12 MBps) [2024-12-13T18:24:46.337Z] Copying: 147/1024 [MB] (14 MBps) [2024-12-13T18:24:47.723Z] Copying: 163/1024 [MB] (16 MBps) [2024-12-13T18:24:48.667Z] Copying: 176/1024 [MB] (12 MBps) [2024-12-13T18:24:49.610Z] Copying: 194/1024 [MB] (17 MBps) [2024-12-13T18:24:50.552Z] Copying: 231/1024 [MB] (37 MBps) [2024-12-13T18:24:51.496Z] Copying: 243/1024 [MB] (11 MBps) [2024-12-13T18:24:52.440Z] Copying: 253/1024 [MB] (10 MBps) [2024-12-13T18:24:53.383Z] Copying: 263/1024 [MB] (10 MBps) [2024-12-13T18:24:54.328Z] Copying: 273/1024 [MB] (10 MBps) [2024-12-13T18:24:55.730Z] Copying: 292/1024 [MB] (18 MBps) [2024-12-13T18:24:56.671Z] Copying: 308/1024 [MB] (15 MBps) [2024-12-13T18:24:57.604Z] Copying: 321/1024 [MB] (12 MBps) [2024-12-13T18:24:58.536Z] Copying: 334/1024 [MB] (13 MBps) [2024-12-13T18:24:59.470Z] Copying: 349/1024 [MB] (14 MBps) [2024-12-13T18:25:00.403Z] Copying: 363/1024 [MB] (14 MBps) [2024-12-13T18:25:01.338Z] Copying: 376/1024 [MB] (13 MBps) [2024-12-13T18:25:02.708Z] Copying: 391/1024 [MB] (15 MBps) [2024-12-13T18:25:03.642Z] Copying: 406/1024 [MB] (14 MBps) [2024-12-13T18:25:04.597Z] Copying: 421/1024 [MB] (14 MBps) [2024-12-13T18:25:05.597Z] Copying: 434/1024 [MB] (13 MBps) [2024-12-13T18:25:06.539Z] Copying: 447/1024 [MB] (13 MBps) [2024-12-13T18:25:07.475Z] Copying: 458/1024 [MB] (10 MBps) [2024-12-13T18:25:08.408Z] Copying: 470/1024 [MB] (11 MBps) [2024-12-13T18:25:09.342Z] Copying: 483/1024 [MB] (13 MBps) [2024-12-13T18:25:10.716Z] Copying: 497/1024 [MB] (14 MBps) [2024-12-13T18:25:11.651Z] Copying: 512/1024 [MB] (14 MBps) [2024-12-13T18:25:12.591Z] Copying: 525/1024 [MB] (13 MBps) [2024-12-13T18:25:13.534Z] Copying: 543/1024 [MB] (18 MBps) [2024-12-13T18:25:14.474Z] Copying: 576/1024 [MB] (33 MBps) [2024-12-13T18:25:15.417Z] Copying: 607/1024 [MB] (31 MBps) [2024-12-13T18:25:16.359Z] Copying: 629/1024 [MB] (21 MBps) [2024-12-13T18:25:17.745Z] Copying: 656/1024 [MB] (27 MBps) [2024-12-13T18:25:18.317Z] Copying: 687/1024 [MB] (30 MBps) [2024-12-13T18:25:19.704Z] Copying: 710/1024 [MB] (22 MBps) [2024-12-13T18:25:20.648Z] Copying: 733/1024 [MB] (23 MBps) [2024-12-13T18:25:21.591Z] Copying: 747/1024 [MB] (13 MBps) [2024-12-13T18:25:22.533Z] Copying: 758/1024 [MB] (11 MBps) [2024-12-13T18:25:23.477Z] Copying: 776/1024 [MB] (17 MBps) [2024-12-13T18:25:24.421Z] Copying: 796/1024 [MB] (20 MBps) [2024-12-13T18:25:25.363Z] Copying: 814/1024 [MB] (17 MBps) [2024-12-13T18:25:26.749Z] Copying: 829/1024 [MB] (15 MBps) [2024-12-13T18:25:27.320Z] Copying: 860/1024 [MB] (30 MBps) [2024-12-13T18:25:28.707Z] Copying: 879/1024 [MB] (19 MBps) [2024-12-13T18:25:29.650Z] Copying: 907/1024 [MB] (27 MBps) [2024-12-13T18:25:30.638Z] Copying: 946/1024 [MB] (39 MBps) [2024-12-13T18:25:31.603Z] Copying: 985/1024 [MB] (39 MBps) [2024-12-13T18:25:31.603Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-13 18:25:31.280656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:57.226 [2024-12-13 18:25:31.280694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:57.226 [2024-12-13 18:25:31.280705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:57.226 [2024-12-13 18:25:31.280714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.226 [2024-12-13 18:25:31.280730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:57.226 [2024-12-13 18:25:31.281122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.226 [2024-12-13 18:25:31.281137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:57.226 [2024-12-13 18:25:31.281144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:30:57.226 [2024-12-13 18:25:31.281150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.226 [2024-12-13 18:25:31.282534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.226 [2024-12-13 18:25:31.282550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:57.226 [2024-12-13 18:25:31.282557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:30:57.226 [2024-12-13 18:25:31.282564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.226 [2024-12-13 18:25:31.282585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.226 [2024-12-13 18:25:31.282591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:57.226 [2024-12-13 18:25:31.282598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:57.226 [2024-12-13 18:25:31.282603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.226 [2024-12-13 18:25:31.282638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.226 [2024-12-13 18:25:31.282644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:57.226 [2024-12-13 18:25:31.282650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:57.226 [2024-12-13 18:25:31.282656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.226 [2024-12-13 18:25:31.282666] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:57.226 [2024-12-13 18:25:31.282677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 
261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:57.226 [2024-12-13 18:25:31.282755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.282995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283007] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 
18:25:31.283157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:57.227 [2024-12-13 18:25:31.283279] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:57.227 [2024-12-13 18:25:31.283285] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1f80ed3-7057-4d48-a05a-a50f61545960 00:30:57.228 [2024-12-13 18:25:31.283292] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:57.228 [2024-12-13 18:25:31.283297] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:57.228 [2024-12-13 18:25:31.283303] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:57.228 [2024-12-13 18:25:31.283308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:57.228 [2024-12-13 18:25:31.283314] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:57.228 [2024-12-13 18:25:31.283320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:57.228 [2024-12-13 18:25:31.283325] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:57.228 [2024-12-13 18:25:31.283331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:57.228 [2024-12-13 18:25:31.283336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:57.228 [2024-12-13 18:25:31.283341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.228 [2024-12-13 18:25:31.283346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:57.228 [2024-12-13 18:25:31.283352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:30:57.228 [2024-12-13 18:25:31.283360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.284581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.228 [2024-12-13 18:25:31.284595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:57.228 [2024-12-13 18:25:31.284602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:30:57.228 [2024-12-13 18:25:31.284613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.284680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.228 [2024-12-13 18:25:31.284689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:57.228 [2024-12-13 18:25:31.284695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:57.228 [2024-12-13 18:25:31.284701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.288824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.288953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:57.228 [2024-12-13 18:25:31.288971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.288977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.289020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.289029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:57.228 [2024-12-13 18:25:31.289036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.289046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.289069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.289076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:57.228 [2024-12-13 18:25:31.289082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.289087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.289098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.289104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:57.228 [2024-12-13 18:25:31.289112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.289117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.296820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:30:57.228 [2024-12-13 18:25:31.296918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:57.228 [2024-12-13 18:25:31.296976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.296994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.303035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.303154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:57.228 [2024-12-13 18:25:31.303203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.303221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.303277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.303331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:57.228 [2024-12-13 18:25:31.303349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.303363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.303392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.303413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:57.228 [2024-12-13 18:25:31.303501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.303522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.303571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.303589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:57.228 [2024-12-13 18:25:31.303635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.303652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.303682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.303704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:57.228 [2024-12-13 18:25:31.303720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.303753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.303796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.303815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:57.228 [2024-12-13 18:25:31.303830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.303851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.303892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.228 [2024-12-13 18:25:31.303944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:57.228 [2024-12-13 18:25:31.303959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.228 [2024-12-13 18:25:31.303974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.228 [2024-12-13 18:25:31.304079] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 23.395 ms, result 0 00:30:57.800 00:30:57.800 00:30:57.800 18:25:31 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:57.800 [2024-12-13 18:25:31.981674] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:57.800 [2024-12-13 18:25:31.982332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97011 ] 00:30:57.800 [2024-12-13 18:25:32.126483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.800 [2024-12-13 18:25:32.150738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.062 [2024-12-13 18:25:32.238315] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:58.062 [2024-12-13 18:25:32.238531] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:58.062 [2024-12-13 18:25:32.384852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.384976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:58.062 [2024-12-13 18:25:32.385032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:58.062 [2024-12-13 18:25:32.385051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.385103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.385122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:58.062 [2024-12-13 18:25:32.385138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:58.062 [2024-12-13 18:25:32.385157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.385188] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:58.062 [2024-12-13 18:25:32.385636] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:58.062 [2024-12-13 18:25:32.385752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.385802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:58.062 [2024-12-13 18:25:32.385823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:30:58.062 [2024-12-13 18:25:32.385841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.386078] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:58.062 [2024-12-13 18:25:32.386113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.386129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:58.062 [2024-12-13 18:25:32.386180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:58.062 [2024-12-13 18:25:32.386200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 
[2024-12-13 18:25:32.386260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.386279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:58.062 [2024-12-13 18:25:32.386325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:58.062 [2024-12-13 18:25:32.386342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.386579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.386640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:58.062 [2024-12-13 18:25:32.386676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:30:58.062 [2024-12-13 18:25:32.386692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.386766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.386839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:58.062 [2024-12-13 18:25:32.386889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:58.062 [2024-12-13 18:25:32.386903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.386929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.386946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:58.062 [2024-12-13 18:25:32.386961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:58.062 [2024-12-13 18:25:32.386974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.386999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:58.062 [2024-12-13 18:25:32.388280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.388361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:58.062 [2024-12-13 18:25:32.388407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:30:58.062 [2024-12-13 18:25:32.388427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.388465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.388481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:58.062 [2024-12-13 18:25:32.388525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:58.062 [2024-12-13 18:25:32.388541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.388570] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:58.062 [2024-12-13 18:25:32.388597] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:58.062 [2024-12-13 18:25:32.388663] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:58.062 [2024-12-13 18:25:32.388717] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:58.062 [2024-12-13 18:25:32.388831] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob store 0x150 bytes 00:30:58.062 [2024-12-13 18:25:32.388883] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:58.062 [2024-12-13 18:25:32.388944] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:58.062 [2024-12-13 18:25:32.388968] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:58.062 [2024-12-13 18:25:32.388995] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:58.062 [2024-12-13 18:25:32.389018] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:58.062 [2024-12-13 18:25:32.389032] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:58.062 [2024-12-13 18:25:32.389073] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:58.062 [2024-12-13 18:25:32.389089] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:58.062 [2024-12-13 18:25:32.389108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.389122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:58.062 [2024-12-13 18:25:32.389137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:30:58.062 [2024-12-13 18:25:32.389151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.062 [2024-12-13 18:25:32.389227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.062 [2024-12-13 18:25:32.389263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:58.062 [2024-12-13 18:25:32.389284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:58.062 [2024-12-13 18:25:32.389317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.063 [2024-12-13 18:25:32.389423] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:58.063 [2024-12-13 18:25:32.389492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:58.063 [2024-12-13 18:25:32.389508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.063 [2024-12-13 18:25:32.389525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:58.063 [2024-12-13 18:25:32.389588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:58.063 [2024-12-13 18:25:32.389599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:58.063 [2024-12-13 18:25:32.389604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.063 [2024-12-13 18:25:32.389614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:58.063 [2024-12-13 18:25:32.389619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:58.063 [2024-12-13 18:25:32.389624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.063 [2024-12-13 18:25:32.389629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md 00:30:58.063 [2024-12-13 18:25:32.389634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:58.063 [2024-12-13 18:25:32.389639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:58.063 [2024-12-13 18:25:32.389649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:58.063 [2024-12-13 18:25:32.389653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:58.063 [2024-12-13 18:25:32.389666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.063 [2024-12-13 18:25:32.389675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:58.063 [2024-12-13 18:25:32.389680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.063 [2024-12-13 18:25:32.389691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:58.063 [2024-12-13 18:25:32.389696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.063 [2024-12-13 18:25:32.389705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:58.063 [2024-12-13 18:25:32.389711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.063 [2024-12-13 18:25:32.389721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:58.063 [2024-12-13 18:25:32.389725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.063 [2024-12-13 18:25:32.389735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:58.063 [2024-12-13 18:25:32.389743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:58.063 [2024-12-13 18:25:32.389748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.063 [2024-12-13 18:25:32.389753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:58.063 [2024-12-13 18:25:32.389758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:58.063 [2024-12-13 18:25:32.389762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:58.063 [2024-12-13 18:25:32.389773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:58.063 [2024-12-13 18:25:32.389778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389782] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:58.063 [2024-12-13 18:25:32.389788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:58.063 [2024-12-13 18:25:32.389793] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.063 [2024-12-13 18:25:32.389801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.063 [2024-12-13 18:25:32.389810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:58.063 [2024-12-13 18:25:32.389815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:58.063 [2024-12-13 18:25:32.389820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:58.063 [2024-12-13 18:25:32.389825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:58.063 [2024-12-13 18:25:32.389831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:58.063 [2024-12-13 18:25:32.389836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:58.063 [2024-12-13 18:25:32.389843] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:58.063 [2024-12-13 18:25:32.389850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.063 [2024-12-13 18:25:32.389858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:58.063 [2024-12-13 18:25:32.389865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:58.063 [2024-12-13 18:25:32.389871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:58.063 [2024-12-13 18:25:32.389876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:58.063 [2024-12-13 18:25:32.389882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:58.063 [2024-12-13 18:25:32.389887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:58.063 [2024-12-13 18:25:32.389893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:58.063 [2024-12-13 18:25:32.389898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:58.063 [2024-12-13 18:25:32.389903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:58.063 [2024-12-13 18:25:32.389909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:58.063 [2024-12-13 18:25:32.389914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:58.063 [2024-12-13 18:25:32.389923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:58.063 [2024-12-13 18:25:32.389930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:58.063 [2024-12-13 18:25:32.389936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:58.063 [2024-12-13 18:25:32.389942] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:58.063 [2024-12-13 18:25:32.389948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.063 [2024-12-13 18:25:32.389954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:58.063 [2024-12-13 18:25:32.389959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:58.063 [2024-12-13 18:25:32.389964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:58.063 [2024-12-13 18:25:32.389969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:58.063 [2024-12-13 18:25:32.389975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.063 [2024-12-13 18:25:32.389981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:58.063 [2024-12-13 18:25:32.389989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:30:58.063 [2024-12-13 18:25:32.389995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.063 [2024-12-13 18:25:32.395394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.063 [2024-12-13 18:25:32.395423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:58.063 [2024-12-13 18:25:32.395430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.367 ms 00:30:58.063 [2024-12-13 18:25:32.395435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.063 [2024-12-13 18:25:32.395493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.063 [2024-12-13 18:25:32.395499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:58.063 [2024-12-13 18:25:32.395505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:58.063 [2024-12-13 18:25:32.395511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.063 [2024-12-13 18:25:32.411116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.063 [2024-12-13 18:25:32.411227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:58.063 [2024-12-13 18:25:32.411255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.573 ms 00:30:58.063 [2024-12-13 18:25:32.411262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.063 [2024-12-13 18:25:32.411286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.063 [2024-12-13 18:25:32.411293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:58.063 [2024-12-13 18:25:32.411300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:58.063 [2024-12-13 18:25:32.411305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.064 [2024-12-13 18:25:32.411388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.064 [2024-12-13 18:25:32.411399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize trim map 00:30:58.064 [2024-12-13 18:25:32.411405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:58.064 [2024-12-13 18:25:32.411413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.064 [2024-12-13 18:25:32.411497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.064 [2024-12-13 18:25:32.411504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:58.064 [2024-12-13 18:25:32.411510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:30:58.064 [2024-12-13 18:25:32.411517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.064 [2024-12-13 18:25:32.416313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.064 [2024-12-13 18:25:32.416346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:58.064 [2024-12-13 18:25:32.416366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:30:58.064 [2024-12-13 18:25:32.416375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.064 [2024-12-13 18:25:32.416484] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:58.064 [2024-12-13 18:25:32.416498] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:58.064 [2024-12-13 18:25:32.416508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.064 [2024-12-13 18:25:32.416517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:58.064 [2024-12-13 18:25:32.416527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:58.064 [2024-12-13 18:25:32.416538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.064 [2024-12-13 18:25:32.429316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.064 [2024-12-13 18:25:32.429405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:58.064 [2024-12-13 18:25:32.429417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.762 ms 00:30:58.064 [2024-12-13 18:25:32.429423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.064 [2024-12-13 18:25:32.429514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.064 [2024-12-13 18:25:32.429521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:58.064 [2024-12-13 18:25:32.429530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:58.064 [2024-12-13 18:25:32.429537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.064 [2024-12-13 18:25:32.429572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.064 [2024-12-13 18:25:32.429582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:58.064 [2024-12-13 18:25:32.429588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:58.064 [2024-12-13 18:25:32.429593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.064 [2024-12-13 18:25:32.429814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.064 [2024-12-13 18:25:32.429822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:58.064 [2024-12-13 18:25:32.429835] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:30:58.064 [2024-12-13 18:25:32.429840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.064 [2024-12-13 18:25:32.429853] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:58.064 [2024-12-13 18:25:32.429860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.064 [2024-12-13 18:25:32.429868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:58.064 [2024-12-13 18:25:32.429873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:58.064 [2024-12-13 18:25:32.429879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.326 [2024-12-13 18:25:32.436063] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:58.326 [2024-12-13 18:25:32.436168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.326 [2024-12-13 18:25:32.436179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:58.326 [2024-12-13 18:25:32.436186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.276 ms 00:30:58.326 [2024-12-13 18:25:32.436191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.326 [2024-12-13 18:25:32.437938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.326 [2024-12-13 18:25:32.437960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:58.326 [2024-12-13 18:25:32.437967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:30:58.326 [2024-12-13 18:25:32.437973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.326 [2024-12-13 18:25:32.438029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.326 [2024-12-13 18:25:32.438037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:58.326 [2024-12-13 18:25:32.438045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:58.326 [2024-12-13 18:25:32.438051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.326 [2024-12-13 18:25:32.438068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.326 [2024-12-13 18:25:32.438074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:58.326 [2024-12-13 18:25:32.438079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:58.326 [2024-12-13 18:25:32.438084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.326 [2024-12-13 18:25:32.438107] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:58.326 [2024-12-13 18:25:32.438113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.326 [2024-12-13 18:25:32.438119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:58.326 [2024-12-13 18:25:32.438126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:58.326 [2024-12-13 18:25:32.438131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.326 [2024-12-13 18:25:32.441312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.326 [2024-12-13 18:25:32.441338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:58.326 [2024-12-13 
18:25:32.441347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:30:58.326 [2024-12-13 18:25:32.441353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.326 [2024-12-13 18:25:32.441403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.326 [2024-12-13 18:25:32.441416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:58.326 [2024-12-13 18:25:32.441423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:58.326 [2024-12-13 18:25:32.441429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.326 [2024-12-13 18:25:32.442082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 56.938 ms, result 0 00:30:59.270  [2024-12-13T18:25:34.591Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-13T18:25:35.980Z] Copying: 39/1024 [MB] (19 MBps) [2024-12-13T18:25:36.923Z] Copying: 54/1024 [MB] (14 MBps) [2024-12-13T18:25:37.867Z] Copying: 74/1024 [MB] (20 MBps) [2024-12-13T18:25:38.810Z] Copying: 92/1024 [MB] (17 MBps) [2024-12-13T18:25:39.754Z] Copying: 109/1024 [MB] (17 MBps) [2024-12-13T18:25:40.695Z] Copying: 128/1024 [MB] (18 MBps) [2024-12-13T18:25:41.637Z] Copying: 145/1024 [MB] (17 MBps) [2024-12-13T18:25:42.582Z] Copying: 163/1024 [MB] (17 MBps) [2024-12-13T18:25:43.970Z] Copying: 186/1024 [MB] (23 MBps) [2024-12-13T18:25:44.913Z] Copying: 203/1024 [MB] (16 MBps) [2024-12-13T18:25:45.857Z] Copying: 214/1024 [MB] (11 MBps) [2024-12-13T18:25:46.801Z] Copying: 224/1024 [MB] (10 MBps) [2024-12-13T18:25:47.746Z] Copying: 235/1024 [MB] (10 MBps) [2024-12-13T18:25:48.690Z] Copying: 246/1024 [MB] (10 MBps) [2024-12-13T18:25:49.634Z] Copying: 256/1024 [MB] (10 MBps) [2024-12-13T18:25:50.578Z] Copying: 267/1024 [MB] (10 MBps) [2024-12-13T18:25:51.969Z] Copying: 278/1024 [MB] (11 MBps) [2024-12-13T18:25:52.913Z] Copying: 289/1024 [MB] (11 MBps) [2024-12-13T18:25:53.857Z] Copying: 301/1024 [MB] (11 MBps) [2024-12-13T18:25:54.801Z] Copying: 314/1024 [MB] (13 MBps) [2024-12-13T18:25:55.743Z] Copying: 325/1024 [MB] (10 MBps) [2024-12-13T18:25:56.714Z] Copying: 336/1024 [MB] (10 MBps) [2024-12-13T18:25:57.657Z] Copying: 346/1024 [MB] (10 MBps) [2024-12-13T18:25:58.599Z] Copying: 357/1024 [MB] (10 MBps) [2024-12-13T18:25:59.986Z] Copying: 373/1024 [MB] (16 MBps) [2024-12-13T18:26:00.928Z] Copying: 384/1024 [MB] (10 MBps) [2024-12-13T18:26:01.871Z] Copying: 395/1024 [MB] (10 MBps) [2024-12-13T18:26:02.813Z] Copying: 418/1024 [MB] (23 MBps) [2024-12-13T18:26:03.758Z] Copying: 436/1024 [MB] (17 MBps) [2024-12-13T18:26:04.702Z] Copying: 458/1024 [MB] (21 MBps) [2024-12-13T18:26:05.643Z] Copying: 477/1024 [MB] (19 MBps) [2024-12-13T18:26:06.587Z] Copying: 497/1024 [MB] (20 MBps) [2024-12-13T18:26:07.975Z] Copying: 512/1024 [MB] (15 MBps) [2024-12-13T18:26:08.920Z] Copying: 531/1024 [MB] (18 MBps) [2024-12-13T18:26:09.862Z] Copying: 552/1024 [MB] (21 MBps) [2024-12-13T18:26:10.805Z] Copying: 576/1024 [MB] (23 MBps) [2024-12-13T18:26:11.748Z] Copying: 589/1024 [MB] (13 MBps) [2024-12-13T18:26:12.691Z] Copying: 600/1024 [MB] (10 MBps) [2024-12-13T18:26:13.633Z] Copying: 610/1024 [MB] (10 MBps) [2024-12-13T18:26:15.020Z] Copying: 621/1024 [MB] (10 MBps) [2024-12-13T18:26:15.593Z] Copying: 632/1024 [MB] (10 MBps) [2024-12-13T18:26:16.979Z] Copying: 643/1024 [MB] (10 MBps) [2024-12-13T18:26:17.922Z] Copying: 663/1024 [MB] (20 MBps) [2024-12-13T18:26:18.865Z] Copying: 674/1024 [MB] (10 MBps) 
[2024-12-13T18:26:19.809Z] Copying: 686/1024 [MB] (12 MBps) [2024-12-13T18:26:20.752Z] Copying: 705/1024 [MB] (19 MBps) [2024-12-13T18:26:21.696Z] Copying: 724/1024 [MB] (19 MBps) [2024-12-13T18:26:22.707Z] Copying: 735/1024 [MB] (11 MBps) [2024-12-13T18:26:23.672Z] Copying: 746/1024 [MB] (11 MBps) [2024-12-13T18:26:24.616Z] Copying: 761/1024 [MB] (15 MBps) [2024-12-13T18:26:25.996Z] Copying: 774/1024 [MB] (12 MBps) [2024-12-13T18:26:26.940Z] Copying: 794/1024 [MB] (19 MBps) [2024-12-13T18:26:27.877Z] Copying: 816/1024 [MB] (22 MBps) [2024-12-13T18:26:28.822Z] Copying: 835/1024 [MB] (18 MBps) [2024-12-13T18:26:29.762Z] Copying: 852/1024 [MB] (17 MBps) [2024-12-13T18:26:30.706Z] Copying: 868/1024 [MB] (15 MBps) [2024-12-13T18:26:31.647Z] Copying: 884/1024 [MB] (16 MBps) [2024-12-13T18:26:32.591Z] Copying: 904/1024 [MB] (20 MBps) [2024-12-13T18:26:33.978Z] Copying: 929/1024 [MB] (24 MBps) [2024-12-13T18:26:34.922Z] Copying: 940/1024 [MB] (10 MBps) [2024-12-13T18:26:35.865Z] Copying: 954/1024 [MB] (14 MBps) [2024-12-13T18:26:36.807Z] Copying: 970/1024 [MB] (16 MBps) [2024-12-13T18:26:37.751Z] Copying: 992/1024 [MB] (21 MBps) [2024-12-13T18:26:38.323Z] Copying: 1009/1024 [MB] (17 MBps) [2024-12-13T18:26:38.897Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-13 18:26:38.632297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.520 [2024-12-13 18:26:38.632386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:04.520 [2024-12-13 18:26:38.632403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:04.520 [2024-12-13 18:26:38.632413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.520 [2024-12-13 18:26:38.632445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:04.520 [2024-12-13 18:26:38.633226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.520 [2024-12-13 18:26:38.633289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:04.520 [2024-12-13 18:26:38.633302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:32:04.520 [2024-12-13 18:26:38.633312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.520 [2024-12-13 18:26:38.633799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.520 [2024-12-13 18:26:38.633819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:04.520 [2024-12-13 18:26:38.633829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:32:04.520 [2024-12-13 18:26:38.633837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.520 [2024-12-13 18:26:38.633878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.520 [2024-12-13 18:26:38.633888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:04.520 [2024-12-13 18:26:38.633901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:04.520 [2024-12-13 18:26:38.633909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.520 [2024-12-13 18:26:38.633973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.520 [2024-12-13 18:26:38.633990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:04.520 [2024-12-13 18:26:38.633999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.027 ms 00:32:04.520 [2024-12-13 18:26:38.634008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.520 [2024-12-13 18:26:38.634024] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:04.520 [2024-12-13 18:26:38.634037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 
18:26:38.634226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:04.520 [2024-12-13 18:26:38.634387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:32:04.521 [2024-12-13 18:26:38.634442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:04.521 [2024-12-13 18:26:38.634850] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:04.521 [2024-12-13 18:26:38.634859] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1f80ed3-7057-4d48-a05a-a50f61545960 00:32:04.521 [2024-12-13 18:26:38.634868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:04.521 [2024-12-13 18:26:38.634877] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:04.521 [2024-12-13 18:26:38.634884] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:04.521 [2024-12-13 18:26:38.634893] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:04.521 [2024-12-13 18:26:38.634904] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:04.521 [2024-12-13 18:26:38.634912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:04.521 [2024-12-13 18:26:38.634920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:04.521 [2024-12-13 18:26:38.634926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:04.521 [2024-12-13 18:26:38.634933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:04.521 [2024-12-13 18:26:38.634940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.521 [2024-12-13 18:26:38.634948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:04.521 [2024-12-13 18:26:38.634956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:32:04.521 [2024-12-13 18:26:38.634966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.521 [2024-12-13 18:26:38.638193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.521 [2024-12-13 18:26:38.638387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:04.521 [2024-12-13 18:26:38.638452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.075 ms 00:32:04.521 [2024-12-13 18:26:38.638481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.521 [2024-12-13 18:26:38.638623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.521 [2024-12-13 18:26:38.638661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:04.521 [2024-12-13 18:26:38.638942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:32:04.521 [2024-12-13 18:26:38.639120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.521 [2024-12-13 18:26:38.647289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.521 [2024-12-13 18:26:38.647451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.521 [2024-12-13 18:26:38.647507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.521 [2024-12-13 18:26:38.647530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.521 [2024-12-13 18:26:38.647615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.521 [2024-12-13 18:26:38.647638] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.521 [2024-12-13 18:26:38.647666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.521 [2024-12-13 18:26:38.647685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.521 [2024-12-13 18:26:38.647772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.521 [2024-12-13 18:26:38.647821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.521 [2024-12-13 18:26:38.647842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.521 [2024-12-13 18:26:38.647862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.521 [2024-12-13 18:26:38.647891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.521 [2024-12-13 18:26:38.647920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.521 [2024-12-13 18:26:38.647940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.521 [2024-12-13 18:26:38.648005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.521 [2024-12-13 18:26:38.663865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.521 [2024-12-13 18:26:38.663923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.522 [2024-12-13 18:26:38.663935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.522 [2024-12-13 18:26:38.663951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.522 [2024-12-13 18:26:38.677012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.522 [2024-12-13 18:26:38.677066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.522 [2024-12-13 18:26:38.677088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.522 [2024-12-13 18:26:38.677100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.522 [2024-12-13 18:26:38.677158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.522 [2024-12-13 18:26:38.677169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.522 [2024-12-13 18:26:38.677177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.522 [2024-12-13 18:26:38.677186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.522 [2024-12-13 18:26:38.677224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.522 [2024-12-13 18:26:38.677234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.522 [2024-12-13 18:26:38.677263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.522 [2024-12-13 18:26:38.677271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.522 [2024-12-13 18:26:38.677335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.522 [2024-12-13 18:26:38.677350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.522 [2024-12-13 18:26:38.677358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.522 [2024-12-13 18:26:38.677367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.522 [2024-12-13 18:26:38.677392] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.522 [2024-12-13 18:26:38.677402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:04.522 [2024-12-13 18:26:38.677412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.522 [2024-12-13 18:26:38.677423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.522 [2024-12-13 18:26:38.677496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.522 [2024-12-13 18:26:38.677506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.522 [2024-12-13 18:26:38.677515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.522 [2024-12-13 18:26:38.677523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.522 [2024-12-13 18:26:38.677569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.522 [2024-12-13 18:26:38.677579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.522 [2024-12-13 18:26:38.677587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.522 [2024-12-13 18:26:38.677595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.522 [2024-12-13 18:26:38.677737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 45.407 ms, result 0 00:32:04.522 00:32:04.522 00:32:04.522 18:26:38 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:07.071 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:07.071 18:26:41 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:07.071 [2024-12-13 18:26:41.195991] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
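The two restore.sh commands logged just above are the heart of the fast-restore check: once the 'FTL fast shutdown' management process finishes, md5sum -c confirms that the file read back from ftl0 matches the checksum recorded before shutdown, and spdk_dd then writes the test file back into ftl0 at a block offset (--seek=131072), triggering the second FTL startup traced below. A minimal sketch of that sequence, reusing the paths and flags exactly as they appear in this log (the surrounding environment, i.e. a running ftl0 device described by ftl.json, is assumed):

    #!/usr/bin/env bash
    # Paths as they appear in the log above.
    SPDK=/home/vagrant/spdk_repo/spdk
    DD=$SPDK/build/bin/spdk_dd
    CFG=$SPDK/test/ftl/config/ftl.json
    FILE=$SPDK/test/ftl/testfile

    # Step 1 (restore.sh@76): verify the restored data against the
    # md5 recorded before the fast shutdown.
    md5sum -c "$FILE.md5"

    # Step 2 (restore.sh@79): write the test file into the ftl0 bdev
    # at a block offset, flags copied from the logged command.
    "$DD" --if="$FILE" --ob=ftl0 --json="$CFG" --seek=131072

Each spdk_dd invocation is a full SPDK application start, which is why the DPDK EAL banner and a complete FTL startup sequence (superblock load, layout dump, NV cache and L2P restore) repeat below for the same device UUID.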
00:32:07.071 [2024-12-13 18:26:41.196380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97712 ] 00:32:07.071 [2024-12-13 18:26:41.342037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.071 [2024-12-13 18:26:41.370890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.333 [2024-12-13 18:26:41.488208] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:07.333 [2024-12-13 18:26:41.488312] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:07.333 [2024-12-13 18:26:41.650555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.650610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:07.333 [2024-12-13 18:26:41.650625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:07.333 [2024-12-13 18:26:41.650637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.650698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.650710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:07.333 [2024-12-13 18:26:41.650723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:32:07.333 [2024-12-13 18:26:41.650737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.650765] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:07.333 [2024-12-13 18:26:41.651090] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:07.333 [2024-12-13 18:26:41.651127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.651139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:07.333 [2024-12-13 18:26:41.651151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:32:07.333 [2024-12-13 18:26:41.651159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.651477] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:07.333 [2024-12-13 18:26:41.651506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.651516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:07.333 [2024-12-13 18:26:41.651525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:07.333 [2024-12-13 18:26:41.651539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.651629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.651642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:07.333 [2024-12-13 18:26:41.651652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:07.333 [2024-12-13 18:26:41.651660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.651906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:07.333 [2024-12-13 18:26:41.651927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:07.333 [2024-12-13 18:26:41.651937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:32:07.333 [2024-12-13 18:26:41.651945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.652037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.652054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:07.333 [2024-12-13 18:26:41.652064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:07.333 [2024-12-13 18:26:41.652071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.652095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.652103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:07.333 [2024-12-13 18:26:41.652116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:07.333 [2024-12-13 18:26:41.652123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.652148] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:07.333 [2024-12-13 18:26:41.654386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.654431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:07.333 [2024-12-13 18:26:41.654441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.242 ms 00:32:07.333 [2024-12-13 18:26:41.654448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.654484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.654493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:07.333 [2024-12-13 18:26:41.654501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:07.333 [2024-12-13 18:26:41.654508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.654555] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:07.333 [2024-12-13 18:26:41.654583] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:07.333 [2024-12-13 18:26:41.654624] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:07.333 [2024-12-13 18:26:41.654640] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:07.333 [2024-12-13 18:26:41.654746] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:07.333 [2024-12-13 18:26:41.654757] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:07.333 [2024-12-13 18:26:41.654768] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:07.333 [2024-12-13 18:26:41.654778] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:07.333 [2024-12-13 18:26:41.654793] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:07.333 [2024-12-13 18:26:41.654802] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:07.333 [2024-12-13 18:26:41.654809] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:07.333 [2024-12-13 18:26:41.654821] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:07.333 [2024-12-13 18:26:41.654829] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:07.333 [2024-12-13 18:26:41.654836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.654845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:07.333 [2024-12-13 18:26:41.654852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:32:07.333 [2024-12-13 18:26:41.654864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.654946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.333 [2024-12-13 18:26:41.654955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:07.333 [2024-12-13 18:26:41.654969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:07.333 [2024-12-13 18:26:41.654976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.333 [2024-12-13 18:26:41.655082] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:07.333 [2024-12-13 18:26:41.655093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:07.333 [2024-12-13 18:26:41.655107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:07.333 [2024-12-13 18:26:41.655117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.333 [2024-12-13 18:26:41.655132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:07.333 [2024-12-13 18:26:41.655140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:07.333 [2024-12-13 18:26:41.655148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:07.333 [2024-12-13 18:26:41.655157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:07.333 [2024-12-13 18:26:41.655165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:07.333 [2024-12-13 18:26:41.655173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:07.333 [2024-12-13 18:26:41.655181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:07.333 [2024-12-13 18:26:41.655190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:07.333 [2024-12-13 18:26:41.655198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:07.333 [2024-12-13 18:26:41.655205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:07.333 [2024-12-13 18:26:41.655213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:07.333 [2024-12-13 18:26:41.655221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.333 [2024-12-13 18:26:41.655229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:07.333 [2024-12-13 18:26:41.655238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:07.333 [2024-12-13 18:26:41.655280] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.333 [2024-12-13 18:26:41.655289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:07.333 [2024-12-13 18:26:41.655297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:07.333 [2024-12-13 18:26:41.655305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.333 [2024-12-13 18:26:41.655312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:07.333 [2024-12-13 18:26:41.655320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:07.333 [2024-12-13 18:26:41.655328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.333 [2024-12-13 18:26:41.655336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:07.333 [2024-12-13 18:26:41.655344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:07.333 [2024-12-13 18:26:41.655352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.333 [2024-12-13 18:26:41.655359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:07.334 [2024-12-13 18:26:41.655368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:07.334 [2024-12-13 18:26:41.655376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.334 [2024-12-13 18:26:41.655384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:07.334 [2024-12-13 18:26:41.655394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:07.334 [2024-12-13 18:26:41.655402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:07.334 [2024-12-13 18:26:41.655413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:07.334 [2024-12-13 18:26:41.655421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:07.334 [2024-12-13 18:26:41.655428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:07.334 [2024-12-13 18:26:41.655434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:07.334 [2024-12-13 18:26:41.655442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:07.334 [2024-12-13 18:26:41.655448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.334 [2024-12-13 18:26:41.655455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:07.334 [2024-12-13 18:26:41.655462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:07.334 [2024-12-13 18:26:41.655469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.334 [2024-12-13 18:26:41.655476] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:07.334 [2024-12-13 18:26:41.655483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:07.334 [2024-12-13 18:26:41.655491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:07.334 [2024-12-13 18:26:41.655501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.334 [2024-12-13 18:26:41.655508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:07.334 [2024-12-13 18:26:41.655515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:07.334 [2024-12-13 18:26:41.655523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:07.334 
[2024-12-13 18:26:41.655532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:07.334 [2024-12-13 18:26:41.655539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:07.334 [2024-12-13 18:26:41.655546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:07.334 [2024-12-13 18:26:41.655554] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:07.334 [2024-12-13 18:26:41.655563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:07.334 [2024-12-13 18:26:41.655572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:07.334 [2024-12-13 18:26:41.655579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:07.334 [2024-12-13 18:26:41.655586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:07.334 [2024-12-13 18:26:41.655594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:07.334 [2024-12-13 18:26:41.655600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:07.334 [2024-12-13 18:26:41.655607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:07.334 [2024-12-13 18:26:41.655615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:07.334 [2024-12-13 18:26:41.655621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:07.334 [2024-12-13 18:26:41.655628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:07.334 [2024-12-13 18:26:41.655636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:07.334 [2024-12-13 18:26:41.655643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:07.334 [2024-12-13 18:26:41.655660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:07.334 [2024-12-13 18:26:41.655667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:07.334 [2024-12-13 18:26:41.655675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:07.334 [2024-12-13 18:26:41.655683] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:07.334 [2024-12-13 18:26:41.655695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:07.334 [2024-12-13 18:26:41.655704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:07.334 [2024-12-13 18:26:41.655712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:07.334 [2024-12-13 18:26:41.655720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:07.334 [2024-12-13 18:26:41.655727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:07.334 [2024-12-13 18:26:41.655735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.334 [2024-12-13 18:26:41.655742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:07.334 [2024-12-13 18:26:41.655750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:32:07.334 [2024-12-13 18:26:41.655757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.334 [2024-12-13 18:26:41.665817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.334 [2024-12-13 18:26:41.666023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:07.334 [2024-12-13 18:26:41.666043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.017 ms 00:32:07.334 [2024-12-13 18:26:41.666052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.334 [2024-12-13 18:26:41.666139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.334 [2024-12-13 18:26:41.666149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:07.334 [2024-12-13 18:26:41.666158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:07.334 [2024-12-13 18:26:41.666165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.334 [2024-12-13 18:26:41.685712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.334 [2024-12-13 18:26:41.685914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:07.334 [2024-12-13 18:26:41.685993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.481 ms 00:32:07.334 [2024-12-13 18:26:41.686023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.334 [2024-12-13 18:26:41.686095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.334 [2024-12-13 18:26:41.686126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:07.334 [2024-12-13 18:26:41.686153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:07.334 [2024-12-13 18:26:41.686176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.334 [2024-12-13 18:26:41.686340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.334 [2024-12-13 18:26:41.686397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:07.334 [2024-12-13 18:26:41.686502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:07.334 [2024-12-13 18:26:41.686530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.334 [2024-12-13 18:26:41.686714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.334 [2024-12-13 18:26:41.686751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:07.334 [2024-12-13 18:26:41.686780] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:32:07.334 [2024-12-13 18:26:41.686804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.334 [2024-12-13 18:26:41.695435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.334 [2024-12-13 18:26:41.695608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:07.334 [2024-12-13 18:26:41.695683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.592 ms 00:32:07.334 [2024-12-13 18:26:41.695710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.334 [2024-12-13 18:26:41.695868] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:07.334 [2024-12-13 18:26:41.695916] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:07.334 [2024-12-13 18:26:41.695964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.334 [2024-12-13 18:26:41.695990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:07.334 [2024-12-13 18:26:41.696020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:32:07.334 [2024-12-13 18:26:41.696108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.708952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.709105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:07.596 [2024-12-13 18:26:41.709165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.803 ms 00:32:07.596 [2024-12-13 18:26:41.709195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.709383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.709579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:07.596 [2024-12-13 18:26:41.709758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:32:07.596 [2024-12-13 18:26:41.709808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.709888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.709992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:07.596 [2024-12-13 18:26:41.710017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:07.596 [2024-12-13 18:26:41.710043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.710519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.710642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:07.596 [2024-12-13 18:26:41.710708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:32:07.596 [2024-12-13 18:26:41.710732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.710766] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:07.596 [2024-12-13 18:26:41.710798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.710824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:07.596 [2024-12-13 18:26:41.710843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:07.596 [2024-12-13 18:26:41.710868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.720579] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:07.596 [2024-12-13 18:26:41.720846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.720878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:07.596 [2024-12-13 18:26:41.720982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.945 ms 00:32:07.596 [2024-12-13 18:26:41.721006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.723529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.723661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:07.596 [2024-12-13 18:26:41.723713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.474 ms 00:32:07.596 [2024-12-13 18:26:41.723739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.723849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.723877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:07.596 [2024-12-13 18:26:41.723897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:07.596 [2024-12-13 18:26:41.723916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.723957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.724035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:07.596 [2024-12-13 18:26:41.724059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:07.596 [2024-12-13 18:26:41.724078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.724135] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:07.596 [2024-12-13 18:26:41.724160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.724179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:07.596 [2024-12-13 18:26:41.724205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:07.596 [2024-12-13 18:26:41.724285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.730710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.730884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:07.596 [2024-12-13 18:26:41.730939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.345 ms 00:32:07.596 [2024-12-13 18:26:41.730972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.596 [2024-12-13 18:26:41.731379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.596 [2024-12-13 18:26:41.731737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:07.596 [2024-12-13 18:26:41.731926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.144 ms
00:32:07.596 [2024-12-13 18:26:41.732006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:07.596 [2024-12-13 18:26:41.734796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 83.297 ms, result 0
00:32:08.538 [2024-12-13T18:26:43.860Z] Copying: 17/1024 [MB] (17 MBps)
[2024-12-13T18:26:44.804Z] Copying: 31/1024 [MB] (13 MBps)
[2024-12-13T18:26:45.747Z] Copying: 42148/1048576 [kB] (10160 kBps)
[2024-12-13T18:26:47.135Z] Copying: 51/1024 [MB] (10 MBps)
[2024-12-13T18:26:48.077Z] Copying: 68/1024 [MB] (17 MBps)
[2024-12-13T18:26:49.060Z] Copying: 108/1024 [MB] (39 MBps)
[2024-12-13T18:26:50.006Z] Copying: 137/1024 [MB] (29 MBps)
[2024-12-13T18:26:50.949Z] Copying: 147/1024 [MB] (10 MBps)
[2024-12-13T18:26:51.892Z] Copying: 180/1024 [MB] (32 MBps)
[2024-12-13T18:26:52.835Z] Copying: 206/1024 [MB] (25 MBps)
[2024-12-13T18:26:53.780Z] Copying: 225/1024 [MB] (18 MBps)
[2024-12-13T18:26:55.167Z] Copying: 239/1024 [MB] (13 MBps)
[2024-12-13T18:26:56.107Z] Copying: 252/1024 [MB] (13 MBps)
[2024-12-13T18:26:57.051Z] Copying: 281/1024 [MB] (28 MBps)
[2024-12-13T18:26:57.994Z] Copying: 319/1024 [MB] (38 MBps)
[2024-12-13T18:26:58.937Z] Copying: 355/1024 [MB] (35 MBps)
[2024-12-13T18:26:59.881Z] Copying: 367/1024 [MB] (11 MBps)
[2024-12-13T18:27:00.825Z] Copying: 402/1024 [MB] (34 MBps)
[2024-12-13T18:27:01.770Z] Copying: 412/1024 [MB] (10 MBps)
[2024-12-13T18:27:03.156Z] Copying: 427/1024 [MB] (15 MBps)
[2024-12-13T18:27:04.099Z] Copying: 449/1024 [MB] (21 MBps)
[2024-12-13T18:27:05.042Z] Copying: 480/1024 [MB] (31 MBps)
[2024-12-13T18:27:05.990Z] Copying: 500/1024 [MB] (19 MBps)
[2024-12-13T18:27:06.934Z] Copying: 521/1024 [MB] (21 MBps)
[2024-12-13T18:27:07.876Z] Copying: 541/1024 [MB] (19 MBps)
[2024-12-13T18:27:08.819Z] Copying: 572/1024 [MB] (31 MBps)
[2024-12-13T18:27:09.763Z] Copying: 593/1024 [MB] (21 MBps)
[2024-12-13T18:27:11.150Z] Copying: 611/1024 [MB] (17 MBps)
[2024-12-13T18:27:12.097Z] Copying: 627/1024 [MB] (16 MBps)
[2024-12-13T18:27:13.039Z] Copying: 642/1024 [MB] (15 MBps)
[2024-12-13T18:27:13.981Z] Copying: 665/1024 [MB] (22 MBps)
[2024-12-13T18:27:15.001Z] Copying: 688/1024 [MB] (23 MBps)
[2024-12-13T18:27:15.944Z] Copying: 706/1024 [MB] (17 MBps)
[2024-12-13T18:27:16.887Z] Copying: 725/1024 [MB] (19 MBps)
[2024-12-13T18:27:17.830Z] Copying: 744/1024 [MB] (18 MBps)
[2024-12-13T18:27:18.774Z] Copying: 777/1024 [MB] (32 MBps)
[2024-12-13T18:27:20.162Z] Copying: 791/1024 [MB] (14 MBps)
[2024-12-13T18:27:21.105Z] Copying: 809/1024 [MB] (17 MBps)
[2024-12-13T18:27:22.047Z] Copying: 829/1024 [MB] (19 MBps)
[2024-12-13T18:27:22.992Z] Copying: 846/1024 [MB] (17 MBps)
[2024-12-13T18:27:23.933Z] Copying: 865/1024 [MB] (18 MBps)
[2024-12-13T18:27:24.875Z] Copying: 882/1024 [MB] (17 MBps)
[2024-12-13T18:27:25.819Z] Copying: 898/1024 [MB] (15 MBps)
[2024-12-13T18:27:26.763Z] Copying: 912/1024 [MB] (14 MBps)
[2024-12-13T18:27:28.150Z] Copying: 927/1024 [MB] (15 MBps)
[2024-12-13T18:27:29.094Z] Copying: 944/1024 [MB] (17 MBps)
[2024-12-13T18:27:30.038Z] Copying: 961/1024 [MB] (16 MBps)
[2024-12-13T18:27:30.982Z] Copying: 977/1024 [MB] (16 MBps)
[2024-12-13T18:27:31.925Z] Copying: 993/1024 [MB] (15 MBps)
[2024-12-13T18:27:32.868Z] Copying: 1012/1024 [MB] (19 MBps)
[2024-12-13T18:27:33.442Z] Copying: 1023/1024 [MB] (10 MBps)
[2024-12-13T18:27:33.442Z] Copying: 1024/1024 [MB] (average 19 MBps)
[2024-12-13 18:27:33.362759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
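The "Copying:" ticks above are spdk_dd's periodic progress output: each tick gives the cumulative megabytes copied and the instantaneous rate, which swings between roughly 10 and 39 MBps before the final tick reports the overall 19 MBps average. A minimal shell sketch for summarizing such a run from a saved console log (console.log is a stand-in filename, and the tick format is assumed to be exactly as captured above):

# Summarize the per-tick throughput of an spdk_dd copy from a console log.
# Matches ticks like: [2024-12-13T18:26:43.860Z] Copying: 17/1024 [MB] (17 MBps)
grep -o 'Copying: [0-9]*/1024 \[MB\] ([0-9]* MBps)' console.log |
awk -F'[()]' '
    { split($2, parts, " "); mbps = parts[1] + 0
      if (n == 0 || mbps < min) min = mbps
      if (mbps > max) max = mbps
      sum += mbps; n++ }
    END { printf "%d ticks, min %d / max %d / mean %.1f MBps\n", n, min, max, sum / n }'

Because grep -o extracts matches without regard to line boundaries, the same command works on a raw wrapped capture as well as on the one-tick-per-line form above; the final "(average ...)" tick and the lone [kB]-unit tick deliberately fall outside the pattern.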
00:32:59.065 [2024-12-13 18:27:33.362965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:59.065 [2024-12-13 18:27:33.363060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:59.065 [2024-12-13 18:27:33.363087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.065 [2024-12-13 18:27:33.366598] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:59.065 [2024-12-13 18:27:33.369225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.065 [2024-12-13 18:27:33.369391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:59.065 [2024-12-13 18:27:33.369466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.411 ms 00:32:59.065 [2024-12-13 18:27:33.369491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.065 [2024-12-13 18:27:33.381735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.065 [2024-12-13 18:27:33.381924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:59.065 [2024-12-13 18:27:33.381948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.345 ms 00:32:59.065 [2024-12-13 18:27:33.381959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.065 [2024-12-13 18:27:33.382009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.065 [2024-12-13 18:27:33.382019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:59.065 [2024-12-13 18:27:33.382029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:59.065 [2024-12-13 18:27:33.382037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.065 [2024-12-13 18:27:33.382096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.065 [2024-12-13 18:27:33.382109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:59.065 [2024-12-13 18:27:33.382118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:59.065 [2024-12-13 18:27:33.382126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.065 [2024-12-13 18:27:33.382139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:59.065 [2024-12-13 18:27:33.382151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125696 / 261120 wr_cnt: 1 state: open 00:32:59.065 [2024-12-13 18:27:33.382162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 
wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382653] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:59.065 [2024-12-13 18:27:33.382740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382850] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.382999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:59.066 [2024-12-13 18:27:33.383014] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:59.066 [2024-12-13 18:27:33.383031] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1f80ed3-7057-4d48-a05a-a50f61545960 00:32:59.066 [2024-12-13 18:27:33.383043] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125696 00:32:59.066 [2024-12-13 18:27:33.383050] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125728 00:32:59.066 [2024-12-13 18:27:33.383058] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125696 00:32:59.066 [2024-12-13 18:27:33.383066] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:32:59.066 [2024-12-13 18:27:33.383076] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:59.066 [2024-12-13 18:27:33.383086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:59.066 [2024-12-13 18:27:33.383094] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:32:59.066 [2024-12-13 18:27:33.383101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:59.066 [2024-12-13 18:27:33.383108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:59.066 [2024-12-13 18:27:33.383115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.066 [2024-12-13 18:27:33.383123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:59.066 [2024-12-13 18:27:33.383133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:32:59.066 [2024-12-13 18:27:33.383140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.385444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.066 [2024-12-13 18:27:33.385476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:59.066 [2024-12-13 18:27:33.385493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.288 ms 00:32:59.066 [2024-12-13 18:27:33.385501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.385652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.066 [2024-12-13 18:27:33.385662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:59.066 [2024-12-13 18:27:33.385675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:32:59.066 [2024-12-13 18:27:33.385683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.392893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.393072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:59.066 [2024-12-13 18:27:33.393091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.393110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.393173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.393183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:59.066 [2024-12-13 18:27:33.393191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.393199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.393276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.393289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:59.066 [2024-12-13 18:27:33.393300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.393308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.393332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.393340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:59.066 [2024-12-13 18:27:33.393348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.393356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.406915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 
18:27:33.406978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:59.066 [2024-12-13 18:27:33.406996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.407004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.418623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.418822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:59.066 [2024-12-13 18:27:33.418842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.418852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.418907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.418916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:59.066 [2024-12-13 18:27:33.418926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.418943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.418980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.418989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:59.066 [2024-12-13 18:27:33.419005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.419014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.419080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.419090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:59.066 [2024-12-13 18:27:33.419100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.419112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.419142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.419151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:59.066 [2024-12-13 18:27:33.419159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.419167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.419213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.419222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:59.066 [2024-12-13 18:27:33.419231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.419271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.419330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.066 [2024-12-13 18:27:33.419342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:59.066 [2024-12-13 18:27:33.419353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.066 [2024-12-13 18:27:33.419362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.066 [2024-12-13 18:27:33.419505] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 57.239 ms, result 0
00:33:00.453
00:33:00.453
00:33:00.453 18:27:34 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:33:00.453 [2024-12-13 18:27:34.572857] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:33:00.453 [2024-12-13 18:27:34.573032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98247 ]
00:33:00.453 [2024-12-13 18:27:34.722857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:00.453 [2024-12-13 18:27:34.750763] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:00.715 [2024-12-13 18:27:34.869142] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:33:00.715 [2024-12-13 18:27:34.869461] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:33:00.715 [2024-12-13 18:27:35.031403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.715 [2024-12-13 18:27:35.031601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:33:00.715 [2024-12-13 18:27:35.031626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:33:00.715 [2024-12-13 18:27:35.031636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.715 [2024-12-13 18:27:35.031715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.715 [2024-12-13 18:27:35.031730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:00.715 [2024-12-13 18:27:35.031740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:33:00.715 [2024-12-13 18:27:35.031756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.715 [2024-12-13 18:27:35.031787] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:33:00.715 [2024-12-13 18:27:35.032049] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:33:00.715 [2024-12-13 18:27:35.032066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.715 [2024-12-13 18:27:35.032075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:33:00.715 [2024-12-13 18:27:35.032087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms
00:33:00.715 [2024-12-13 18:27:35.032095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.715 [2024-12-13 18:27:35.032433] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
00:33:00.715 [2024-12-13 18:27:35.032460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.715 [2024-12-13 18:27:35.032475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:33:00.715 [2024-12-13 18:27:35.032485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:33:00.715 [2024-12-13 18:27:35.032498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.715
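The "FTL fast shutdown" record above ends the write phase of the restore test; the harness then re-attaches ftl0 from ftl.json and reads the written region back through spdk_dd (its dd-style --skip/--count select the region; the block units they are counted in are not shown in this excerpt), which is what the second startup trace around this point is servicing. The statistics dumped at shutdown are easy to sanity-check: write amplification is total writes divided by user writes, using the figures from the ftl_dev_dump_stats output above.

# WAF = total writes / user writes, from the shutdown statistics dump above.
awk 'BEGIN { total = 125728; user = 125696
             printf "WAF = %d / %d = %.4f\n", total, user, total / user }'
# Prints: WAF = 125728 / 125696 = 1.0003, matching the logged value.

The 32-block difference between total and user writes is the device's own non-user (metadata) traffic; with only Band 1 open at 125696 / 261120 blocks and every other band still free, a WAF this close to 1.0 is what a fresh sequential fill should show.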
[2024-12-13 18:27:35.032556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.715 [2024-12-13 18:27:35.032566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:00.715 [2024-12-13 18:27:35.032574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:00.715 [2024-12-13 18:27:35.032582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.715 [2024-12-13 18:27:35.032832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.715 [2024-12-13 18:27:35.032843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:00.715 [2024-12-13 18:27:35.032853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:33:00.715 [2024-12-13 18:27:35.032865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.715 [2024-12-13 18:27:35.032954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.715 [2024-12-13 18:27:35.032964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:00.715 [2024-12-13 18:27:35.032972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:33:00.715 [2024-12-13 18:27:35.032980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.715 [2024-12-13 18:27:35.033008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.715 [2024-12-13 18:27:35.033018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:00.715 [2024-12-13 18:27:35.033026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:00.715 [2024-12-13 18:27:35.033034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.715 [2024-12-13 18:27:35.033055] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:00.715 [2024-12-13 18:27:35.035145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.715 [2024-12-13 18:27:35.035367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:00.715 [2024-12-13 18:27:35.035387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:33:00.715 [2024-12-13 18:27:35.035395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.715 [2024-12-13 18:27:35.035443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.715 [2024-12-13 18:27:35.035452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:00.715 [2024-12-13 18:27:35.035460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:00.715 [2024-12-13 18:27:35.035467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.715 [2024-12-13 18:27:35.035515] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:00.715 [2024-12-13 18:27:35.035549] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:00.715 [2024-12-13 18:27:35.035591] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:00.715 [2024-12-13 18:27:35.035607] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:00.715 [2024-12-13 18:27:35.035712] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob store 0x150 bytes 00:33:00.715 [2024-12-13 18:27:35.035723] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:00.715 [2024-12-13 18:27:35.035734] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:00.715 [2024-12-13 18:27:35.035744] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:00.715 [2024-12-13 18:27:35.035758] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:00.715 [2024-12-13 18:27:35.035766] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:00.715 [2024-12-13 18:27:35.035774] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:00.715 [2024-12-13 18:27:35.035781] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:00.715 [2024-12-13 18:27:35.035788] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:00.715 [2024-12-13 18:27:35.035801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.715 [2024-12-13 18:27:35.035809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:00.715 [2024-12-13 18:27:35.035816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:33:00.716 [2024-12-13 18:27:35.035824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.716 [2024-12-13 18:27:35.035910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.716 [2024-12-13 18:27:35.035923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:00.716 [2024-12-13 18:27:35.035938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:00.716 [2024-12-13 18:27:35.035946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.716 [2024-12-13 18:27:35.036052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:00.716 [2024-12-13 18:27:35.036063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:00.716 [2024-12-13 18:27:35.036079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:00.716 [2024-12-13 18:27:35.036088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:00.716 [2024-12-13 18:27:35.036105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:00.716 [2024-12-13 18:27:35.036121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:00.716 [2024-12-13 18:27:35.036130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:00.716 [2024-12-13 18:27:35.036146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:00.716 [2024-12-13 18:27:35.036156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:00.716 [2024-12-13 18:27:35.036164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:00.716 [2024-12-13 18:27:35.036172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md 00:33:00.716 [2024-12-13 18:27:35.036181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:00.716 [2024-12-13 18:27:35.036190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:00.716 [2024-12-13 18:27:35.036206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:00.716 [2024-12-13 18:27:35.036214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:00.716 [2024-12-13 18:27:35.036229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:00.716 [2024-12-13 18:27:35.036261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:00.716 [2024-12-13 18:27:35.036270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:00.716 [2024-12-13 18:27:35.036285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:00.716 [2024-12-13 18:27:35.036294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:00.716 [2024-12-13 18:27:35.036310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:00.716 [2024-12-13 18:27:35.036317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:00.716 [2024-12-13 18:27:35.036330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:00.716 [2024-12-13 18:27:35.036337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:00.716 [2024-12-13 18:27:35.036351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:00.716 [2024-12-13 18:27:35.036359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:00.716 [2024-12-13 18:27:35.036365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:00.716 [2024-12-13 18:27:35.036372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:00.716 [2024-12-13 18:27:35.036378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:00.716 [2024-12-13 18:27:35.036385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:00.716 [2024-12-13 18:27:35.036398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:00.716 [2024-12-13 18:27:35.036404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036413] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:00.716 [2024-12-13 18:27:35.036421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:00.716 [2024-12-13 18:27:35.036428] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:00.716 [2024-12-13 18:27:35.036438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.716 [2024-12-13 18:27:35.036447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:00.716 [2024-12-13 18:27:35.036454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:00.716 [2024-12-13 18:27:35.036461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:00.716 [2024-12-13 18:27:35.036468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:00.716 [2024-12-13 18:27:35.036474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:00.716 [2024-12-13 18:27:35.036481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:00.716 [2024-12-13 18:27:35.036489] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:00.716 [2024-12-13 18:27:35.036502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:00.716 [2024-12-13 18:27:35.036511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:00.716 [2024-12-13 18:27:35.036518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:00.716 [2024-12-13 18:27:35.036525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:00.716 [2024-12-13 18:27:35.036532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:00.716 [2024-12-13 18:27:35.036541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:00.716 [2024-12-13 18:27:35.036549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:00.716 [2024-12-13 18:27:35.036556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:00.716 [2024-12-13 18:27:35.036563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:00.716 [2024-12-13 18:27:35.036569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:00.716 [2024-12-13 18:27:35.036577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:00.716 [2024-12-13 18:27:35.036583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:00.716 [2024-12-13 18:27:35.036597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:00.716 [2024-12-13 18:27:35.036604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:00.716 [2024-12-13 18:27:35.036612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:00.716 [2024-12-13 18:27:35.036619] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:00.716 [2024-12-13 18:27:35.036627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:00.716 [2024-12-13 18:27:35.036636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:00.716 [2024-12-13 18:27:35.036643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:00.716 [2024-12-13 18:27:35.036650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:00.716 [2024-12-13 18:27:35.036657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:00.716 [2024-12-13 18:27:35.036667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.716 [2024-12-13 18:27:35.036674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:00.716 [2024-12-13 18:27:35.036683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:33:00.716 [2024-12-13 18:27:35.036690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.716 [2024-12-13 18:27:35.046930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.716 [2024-12-13 18:27:35.047111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:00.716 [2024-12-13 18:27:35.047129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.198 ms 00:33:00.716 [2024-12-13 18:27:35.047138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.716 [2024-12-13 18:27:35.047221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.716 [2024-12-13 18:27:35.047235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:00.716 [2024-12-13 18:27:35.047274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:33:00.716 [2024-12-13 18:27:35.047286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.716 [2024-12-13 18:27:35.069034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.717 [2024-12-13 18:27:35.069104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:00.717 [2024-12-13 18:27:35.069124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.691 ms 00:33:00.717 [2024-12-13 18:27:35.069138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.717 [2024-12-13 18:27:35.069201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.717 [2024-12-13 18:27:35.069216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:00.717 [2024-12-13 18:27:35.069230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:00.717 [2024-12-13 18:27:35.069270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.717 [2024-12-13 18:27:35.069451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.717 [2024-12-13 18:27:35.069474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
00:33:00.717 [2024-12-13 18:27:35.069451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.717 [2024-12-13 18:27:35.069474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:33:00.717 [2024-12-13 18:27:35.069488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms
00:33:00.717 [2024-12-13 18:27:35.069500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.717 [2024-12-13 18:27:35.069705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.717 [2024-12-13 18:27:35.069722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:33:00.717 [2024-12-13 18:27:35.069737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms
00:33:00.717 [2024-12-13 18:27:35.069757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.717 [2024-12-13 18:27:35.079166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.717 [2024-12-13 18:27:35.079212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:33:00.717 [2024-12-13 18:27:35.079230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.379 ms
00:33:00.717 [2024-12-13 18:27:35.079266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.717 [2024-12-13 18:27:35.079397] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:33:00.717 [2024-12-13 18:27:35.079417] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:33:00.717 [2024-12-13 18:27:35.079427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.717 [2024-12-13 18:27:35.079436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:33:00.717 [2024-12-13 18:27:35.079446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:33:00.717 [2024-12-13 18:27:35.079456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.091999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.092051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:33:00.978 [2024-12-13 18:27:35.092067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.526 ms
00:33:00.978 [2024-12-13 18:27:35.092079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.092208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.092218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:33:00.978 [2024-12-13 18:27:35.092226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms
00:33:00.978 [2024-12-13 18:27:35.092238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.092315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.092336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:33:00.978 [2024-12-13 18:27:35.092344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:33:00.978 [2024-12-13 18:27:35.092352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.092676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.092688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:33:00.978 [2024-12-13 18:27:35.092701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms
00:33:00.978 [2024-12-13 18:27:35.092709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.092724] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
00:33:00.978 [2024-12-13 18:27:35.092733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.092744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:33:00.978 [2024-12-13 18:27:35.092753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:33:00.978 [2024-12-13 18:27:35.092760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.101931] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:33:00.978 [2024-12-13 18:27:35.102091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.102102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:33:00.978 [2024-12-13 18:27:35.102112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.314 ms
00:33:00.978 [2024-12-13 18:27:35.102120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.104613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.104646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:33:00.978 [2024-12-13 18:27:35.104656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.465 ms
00:33:00.978 [2024-12-13 18:27:35.104663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.104740] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000
00:33:00.978 [2024-12-13 18:27:35.105354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.105371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:33:00.978 [2024-12-13 18:27:35.105381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms
00:33:00.978 [2024-12-13 18:27:35.105392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.105419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.105428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:33:00.978 [2024-12-13 18:27:35.105436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:33:00.978 [2024-12-13 18:27:35.105443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.105481] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:33:00.978 [2024-12-13 18:27:35.105494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.105501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:33:00.978 [2024-12-13 18:27:35.105509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:33:00.978 [2024-12-13 18:27:35.105517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.111677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.111852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:33:00.978 [2024-12-13 18:27:35.111870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.139 ms
00:33:00.978 [2024-12-13 18:27:35.111878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.111951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:00.978 [2024-12-13 18:27:35.111961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:33:00.978 [2024-12-13 18:27:35.111969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:33:00.978 [2024-12-13 18:27:35.111976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.978 [2024-12-13 18:27:35.113226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.315 ms, result 0
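trace_step logs every management step as a name/duration pair, and finish_msg then reports the wall-clock total; the per-step durations above sum to roughly 74 ms of the 81.315 ms reported for 'FTL startup', so the listed steps account for nearly all of it. A sketch for ranking the slow steps, assuming the reflowed console log is saved as build.log (GNU grep for -P):

  # Pair each "name:" record with the "duration:" record that follows it,
  # then sort descending by duration to see what dominates startup.
  paste <(grep -oP 'name: \K.*' build.log) <(grep -oP 'duration: \K[0-9.]+' build.log) |
    sort -t$'\t' -k2,2 -rn | head -5

Against the records above this surfaces Initialize NV cache (21.691 ms), Restore valid map metadata (12.526 ms) and Initialize metadata (10.198 ms) at the top.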
00:33:02.366 [2024-12-13T18:27:37.315Z] Copying: 11/1024 [MB] (11 MBps)
[2024-12-13T18:27:38.697Z] Copying: 22/1024 [MB] (10 MBps)
[2024-12-13T18:27:39.642Z] Copying: 37/1024 [MB] (15 MBps)
[2024-12-13T18:27:40.620Z] Copying: 59/1024 [MB] (21 MBps)
[2024-12-13T18:27:41.563Z] Copying: 74/1024 [MB] (14 MBps)
[2024-12-13T18:27:42.509Z] Copying: 101/1024 [MB] (26 MBps)
[2024-12-13T18:27:43.453Z] Copying: 115/1024 [MB] (14 MBps)
[2024-12-13T18:27:44.397Z] Copying: 134/1024 [MB] (18 MBps)
[2024-12-13T18:27:45.341Z] Copying: 153/1024 [MB] (19 MBps)
[2024-12-13T18:27:46.727Z] Copying: 174/1024 [MB] (20 MBps)
[2024-12-13T18:27:47.672Z] Copying: 193/1024 [MB] (19 MBps)
[2024-12-13T18:27:48.624Z] Copying: 210/1024 [MB] (17 MBps)
[2024-12-13T18:27:49.568Z] Copying: 233/1024 [MB] (22 MBps)
[2024-12-13T18:27:50.512Z] Copying: 259/1024 [MB] (26 MBps)
[2024-12-13T18:27:51.456Z] Copying: 283/1024 [MB] (24 MBps)
[2024-12-13T18:27:52.400Z] Copying: 302/1024 [MB] (18 MBps)
[2024-12-13T18:27:53.344Z] Copying: 321/1024 [MB] (19 MBps)
[2024-12-13T18:27:54.731Z] Copying: 340/1024 [MB] (18 MBps)
[2024-12-13T18:27:55.680Z] Copying: 356/1024 [MB] (15 MBps)
[2024-12-13T18:27:56.623Z] Copying: 372/1024 [MB] (16 MBps)
[2024-12-13T18:27:57.567Z] Copying: 386/1024 [MB] (14 MBps)
[2024-12-13T18:27:58.509Z] Copying: 396/1024 [MB] (10 MBps)
[2024-12-13T18:27:59.452Z] Copying: 408/1024 [MB] (11 MBps)
[2024-12-13T18:28:00.395Z] Copying: 426/1024 [MB] (18 MBps)
[2024-12-13T18:28:01.335Z] Copying: 438/1024 [MB] (12 MBps)
[2024-12-13T18:28:02.718Z] Copying: 453/1024 [MB] (14 MBps)
[2024-12-13T18:28:03.659Z] Copying: 467/1024 [MB] (13 MBps)
[2024-12-13T18:28:04.599Z] Copying: 484/1024 [MB] (16 MBps)
[2024-12-13T18:28:05.539Z] Copying: 499/1024 [MB] (14 MBps)
[2024-12-13T18:28:06.547Z] Copying: 521/1024 [MB] (22 MBps)
[2024-12-13T18:28:07.488Z] Copying: 541/1024 [MB] (19 MBps)
[2024-12-13T18:28:08.429Z] Copying: 558/1024 [MB] (17 MBps)
[2024-12-13T18:28:09.369Z] Copying: 573/1024 [MB] (15 MBps)
[2024-12-13T18:28:10.752Z] Copying: 591/1024 [MB] (18 MBps)
[2024-12-13T18:28:11.324Z] Copying: 613/1024 [MB] (21 MBps)
[2024-12-13T18:28:12.707Z] Copying: 630/1024 [MB] (17 MBps)
[2024-12-13T18:28:13.650Z] Copying: 649/1024 [MB] (18 MBps)
[2024-12-13T18:28:14.595Z] Copying: 669/1024 [MB] (19 MBps)
[2024-12-13T18:28:15.540Z] Copying: 682/1024 [MB] (13 MBps)
[2024-12-13T18:28:16.483Z] Copying: 701/1024 [MB] (18 MBps)
[2024-12-13T18:28:17.428Z] Copying: 714/1024 [MB] (13 MBps)
[2024-12-13T18:28:18.372Z] Copying: 729/1024 [MB] (14 MBps)
[2024-12-13T18:28:19.317Z] Copying: 743/1024 [MB] (14 MBps)
[2024-12-13T18:28:20.705Z] Copying: 754/1024 [MB] (11 MBps)
[2024-12-13T18:28:21.649Z] Copying: 766/1024 [MB] (11 MBps)
[2024-12-13T18:28:22.593Z] Copying: 777/1024 [MB] (11 MBps)
[2024-12-13T18:28:23.538Z] Copying: 788/1024 [MB] (10 MBps)
[2024-12-13T18:28:24.483Z] Copying: 798/1024 [MB] (10 MBps)
[2024-12-13T18:28:25.427Z] Copying: 808/1024 [MB] (10 MBps)
[2024-12-13T18:28:26.370Z] Copying: 819/1024 [MB] (10 MBps)
[2024-12-13T18:28:27.758Z] Copying: 837/1024 [MB] (17 MBps)
[2024-12-13T18:28:28.331Z] Copying: 855/1024 [MB] (17 MBps)
[2024-12-13T18:28:29.719Z] Copying: 869/1024 [MB] (14 MBps)
[2024-12-13T18:28:30.662Z] Copying: 892/1024 [MB] (22 MBps)
[2024-12-13T18:28:31.607Z] Copying: 913/1024 [MB] (20 MBps)
[2024-12-13T18:28:32.582Z] Copying: 935/1024 [MB] (21 MBps)
[2024-12-13T18:28:33.533Z] Copying: 950/1024 [MB] (15 MBps)
[2024-12-13T18:28:34.476Z] Copying: 966/1024 [MB] (15 MBps)
[2024-12-13T18:28:35.419Z] Copying: 983/1024 [MB] (17 MBps)
[2024-12-13T18:28:36.361Z] Copying: 997/1024 [MB] (13 MBps)
[2024-12-13T18:28:36.934Z] Copying: 1017/1024 [MB] (20 MBps)
[2024-12-13T18:28:37.196Z] Copying: 1024/1024 [MB] (average 16 MBps)
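The copy moves 1024 MB between the end of FTL startup (18:27:35) and the first shutdown record below (18:28:37), about 62 seconds of wall time, so the reported average checks out:

  # 1024 MB over the ~62 s spanned by the timestamps above:
  echo $(( 1024 / 62 ))   # prints 16, matching "average 16 MBps"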
[2024-12-13 18:28:37.126391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:02.819 [2024-12-13 18:28:37.126477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:34:02.819 [2024-12-13 18:28:37.126494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:34:02.819 [2024-12-13 18:28:37.126503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.819 [2024-12-13 18:28:37.126528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:34:02.819 [2024-12-13 18:28:37.127373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:02.819 [2024-12-13 18:28:37.127402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:34:02.819 [2024-12-13 18:28:37.127415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms
00:34:02.819 [2024-12-13 18:28:37.127433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.819 [2024-12-13 18:28:37.127793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:02.819 [2024-12-13 18:28:37.127817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:34:02.819 [2024-12-13 18:28:37.127827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms
00:34:02.819 [2024-12-13 18:28:37.127835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.819 [2024-12-13 18:28:37.127868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:02.819 [2024-12-13 18:28:37.127878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:34:02.819 [2024-12-13 18:28:37.127887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:34:02.819 [2024-12-13 18:28:37.127895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.819 [2024-12-13 18:28:37.127958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:02.819 [2024-12-13 18:28:37.128042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:34:02.819 [2024-12-13 18:28:37.128056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:34:02.819 [2024-12-13 18:28:37.128064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.819 [2024-12-13 18:28:37.128080] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:34:02.819 [2024-12-13 18:28:37.128094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:34:02.819 [2024-12-13 18:28:37.128116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:34:02.819 [2024-12-13 18:28:37.128207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.128998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.129006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.129014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:34:02.820 [2024-12-13 18:28:37.129029] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:34:02.820 [2024-12-13 18:28:37.129038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1f80ed3-7057-4d48-a05a-a50f61545960
00:34:02.820 [2024-12-13 18:28:37.129047] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:34:02.820 [2024-12-13 18:28:37.129055] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5408
00:34:02.820 [2024-12-13 18:28:37.129062] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5376
00:34:02.821 [2024-12-13 18:28:37.129073] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0060
00:34:02.821 [2024-12-13 18:28:37.129081] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:34:02.821 [2024-12-13 18:28:37.129093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:34:02.821 [2024-12-13 18:28:37.129100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:34:02.821 [2024-12-13 18:28:37.129106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:34:02.821 [2024-12-13 18:28:37.129113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:34:02.821 [2024-12-13 18:28:37.129120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:02.821 [2024-12-13 18:28:37.129128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:34:02.821 [2024-12-13 18:28:37.129136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms
00:34:02.821 [2024-12-13 18:28:37.129143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
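The stats block makes the write-amplification arithmetic explicit: WAF is total writes divided by user writes, and the total valid LBA count (131072) matches Band 1's valid-block count in the dump above. A one-line check:

  # WAF = total writes / user writes, from the two records above:
  awk 'BEGIN { printf "WAF: %.4f\n", 5408 / 5376 }'   # prints WAF: 1.0060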
00:34:02.821 [2024-12-13 18:28:37.132058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:02.821 [2024-12-13 18:28:37.132128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:34:02.821 [2024-12-13 18:28:37.132149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.898 ms
00:34:02.821 [2024-12-13 18:28:37.132158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.132323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:02.821 [2024-12-13 18:28:37.132334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:34:02.821 [2024-12-13 18:28:37.132344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms
00:34:02.821 [2024-12-13 18:28:37.132352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.140872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.140928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:34:02.821 [2024-12-13 18:28:37.140939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.140946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.141014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.141029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:34:02.821 [2024-12-13 18:28:37.141037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.141045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.141115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.141128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:34:02.821 [2024-12-13 18:28:37.141137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.141148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.141165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.141178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:34:02.821 [2024-12-13 18:28:37.141186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.141194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.155447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.155664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:34:02.821 [2024-12-13 18:28:37.155693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.155702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.166734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.166911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:34:02.821 [2024-12-13 18:28:37.166929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.166939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.166993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.167003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:34:02.821 [2024-12-13 18:28:37.167020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.167029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.167064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.167073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:34:02.821 [2024-12-13 18:28:37.167081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.167089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.167146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.167156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:34:02.821 [2024-12-13 18:28:37.167170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.167185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.167213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.167223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:34:02.821 [2024-12-13 18:28:37.167231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.167262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.167301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.167310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:34:02.821 [2024-12-13 18:28:37.167318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.167329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.167372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:02.821 [2024-12-13 18:28:37.167385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:34:02.821 [2024-12-13 18:28:37.167394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:02.821 [2024-12-13 18:28:37.167402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:02.821 [2024-12-13 18:28:37.167535] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.238 ms, result 0
00:34:03.083
00:34:03.083
00:34:03.083 18:28:37 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:34:05.627 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:34:05.627 Process with pid 96241 is not found
00:34:05.627 Remove shared memory files
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 96241
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96241 ']'
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96241
00:34:05.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96241) - No such process
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 96241 is not found'
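killprocess probes the target with kill -0 before deciding what to report: signal 0 delivers nothing and only tests whether the pid exists, so a process that already exited surfaces as the shell's "No such process" error and the helper falls through to the echo. A standalone sketch of the same pattern (the pid variable is illustrative, not from the harness):

  # kill -0 sends no signal; it only checks that the target pid exists.
  if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
  else
    echo "Process with pid $pid is not found"
  fi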
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_band_md /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_l2p_l1 /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_l2p_l2 /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_l2p_l2_ctx /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_nvc_md /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_p2l_pool /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_sb /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_sb_shm /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_trim_bitmap /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_trim_log /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_trim_md /dev/hugepages/ftl_b1f80ed3-7057-4d48-a05a-a50f61545960_vmap
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:34:05.627
00:34:05.627 real 4m22.577s
00:34:05.627 user 4m11.019s
00:34:05.627 sys 0m11.249s
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:05.627 18:28:39 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:34:05.627 ************************************
00:34:05.627 END TEST ftl_restore_fast
00:34:05.627 ************************************
00:34:05.627 Process with pid 87725 is not found
18:28:39 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
18:28:39 ftl -- ftl/ftl.sh@14 -- # killprocess 87725
18:28:39 ftl -- common/autotest_common.sh@954 -- # '[' -z 87725 ']'
18:28:39 ftl -- common/autotest_common.sh@958 -- # kill -0 87725
00:34:05.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87725) - No such process
18:28:39 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 87725 is not found'
00:34:05.627 18:28:39 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:34:05.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:28:39 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=98917
18:28:39 ftl -- ftl/ftl.sh@20 -- # waitforlisten 98917
18:28:39 ftl -- common/autotest_common.sh@835 -- # '[' -z 98917 ']'
18:28:39 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
18:28:39 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
18:28:39 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
18:28:39 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
18:28:39 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:05.627 18:28:39 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:34:05.627 [2024-12-13 18:28:39.751951] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
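waitforlisten records its inputs here (rpc_addr=/var/tmp/spdk.sock, max_retries=100) and then blocks until the freshly started spdk_tgt (pid 98917) answers on that socket. Its internals are not traced above; a simplified sketch of the idea, polling with the real rpc_get_methods RPC rather than the harness's exact mechanism:

  # Simplified polling loop (illustrative, not the harness implementation):
  for ((i = 0; i < 100; i++)); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
  done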
00:34:05.628 [2024-12-13 18:28:39.752072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98917 ]
00:34:05.628 [2024-12-13 18:28:39.896983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:05.628 [2024-12-13 18:28:39.923676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:34:06.570 18:28:40 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:06.570 18:28:40 ftl -- common/autotest_common.sh@868 -- # return 0
00:34:06.570 18:28:40 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:34:06.570 nvme0n1
00:34:06.570 18:28:40 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:34:06.570 18:28:40 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:34:06.570 18:28:40 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:34:06.831 18:28:41 ftl -- ftl/common.sh@28 -- # stores=d1e1815b-8553-4ca3-aed5-bee4aea02d76
00:34:06.831 18:28:41 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:34:06.831 18:28:41 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d1e1815b-8553-4ca3-aed5-bee4aea02d76
00:34:07.093 18:28:41 ftl -- ftl/ftl.sh@23 -- # killprocess 98917
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@954 -- # '[' -z 98917 ']'
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@958 -- # kill -0 98917
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@959 -- # uname
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98917
00:34:07.093 killing process with pid 98917
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98917'
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@973 -- # kill 98917
00:34:07.093 18:28:41 ftl -- common/autotest_common.sh@978 -- # wait 98917
00:34:07.354 18:28:41 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:34:07.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:07.615 Waiting for block devices as requested
00:34:07.876 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:34:07.876 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:34:07.876 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:34:07.876 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:13.166 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:13.166 Remove shared memory files
18:28:47 ftl -- ftl/ftl.sh@28 -- # remove_shm
18:28:47 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
18:28:47 ftl -- ftl/common.sh@205 -- # rm -f rm -f
18:28:47 ftl -- ftl/common.sh@206 -- # rm -f rm -f
18:28:47 ftl -- ftl/common.sh@207 -- # rm -f rm -f
18:28:47 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
18:28:47 ftl -- ftl/common.sh@209 -- # rm -f rm -f
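The "uio_pci_generic -> nvme" lines show setup.sh reset handing the four test controllers back to the kernel nvme driver, while the virtio disk on 0000:00:03.0 is skipped because its partitions are mounted. One way to confirm a rebind afterwards, using the standard sysfs layout and a BDF taken from this log:

  # Print the driver currently bound to the first test controller:
  basename "$(readlink -f /sys/bus/pci/devices/0000:00:11.0/driver)"   # expect: nvme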
00:34:13.166 ************************************
00:34:13.166 END TEST ftl
00:34:13.166 ************************************
00:34:13.166
00:34:13.166 real 16m53.784s
00:34:13.166 user 18m53.036s
00:34:13.166 sys 1m24.013s
00:34:13.166 18:28:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:13.166 18:28:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:13.166 18:28:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:13.166 18:28:47 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:13.166 18:28:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:13.166 18:28:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:13.166 18:28:47 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:13.166 18:28:47 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:13.166 18:28:47 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:13.166 18:28:47 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:13.166 18:28:47 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:13.166 18:28:47 -- common/autotest_common.sh@10 -- # set +x
00:34:13.166 18:28:47 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:13.166 18:28:47 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:13.166 18:28:47 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:13.166 18:28:47 -- common/autotest_common.sh@10 -- # set +x
00:34:14.552 INFO: APP EXITING
00:34:14.552 INFO: killing all VMs
00:34:14.552 INFO: killing vhost app
00:34:14.552 INFO: EXIT DONE
00:34:14.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:15.386 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:15.386 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:15.386 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:15.386 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:15.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:16.220 Cleaning
00:34:16.220 Removing: /var/run/dpdk/spdk0/config
00:34:16.220 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:16.220 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:16.220 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:16.220 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:16.220 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:16.220 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:16.220 Removing: /var/run/dpdk/spdk0
00:34:16.220 Removing: /var/run/dpdk/spdk_pid70772
00:34:16.220 Removing: /var/run/dpdk/spdk_pid70930
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71132
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71219
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71242
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71354
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71366
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71549
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71622
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71702
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71796
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71877
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71916
00:34:16.220 Removing: /var/run/dpdk/spdk_pid71947
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72018
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72091
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72516
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72558
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72604
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72615
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72673
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72689
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72747
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72763
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72805
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72823
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72865
00:34:16.220 Removing: /var/run/dpdk/spdk_pid72883
00:34:16.220 Removing: /var/run/dpdk/spdk_pid73010
00:34:16.220 Removing: /var/run/dpdk/spdk_pid73041
00:34:16.220 Removing: /var/run/dpdk/spdk_pid73119
00:34:16.220 Removing: /var/run/dpdk/spdk_pid73280
00:34:16.220 Removing: /var/run/dpdk/spdk_pid73353
00:34:16.220 Removing: /var/run/dpdk/spdk_pid73384
00:34:16.220 Removing: /var/run/dpdk/spdk_pid73795
00:34:16.220 Removing: /var/run/dpdk/spdk_pid73882
00:34:16.220 Removing: /var/run/dpdk/spdk_pid73988
00:34:16.220 Removing: /var/run/dpdk/spdk_pid74019
00:34:16.220 Removing: /var/run/dpdk/spdk_pid74050
00:34:16.220 Removing: /var/run/dpdk/spdk_pid74123
00:34:16.220 Removing: /var/run/dpdk/spdk_pid74730
00:34:16.220 Removing: /var/run/dpdk/spdk_pid74761
00:34:16.220 Removing: /var/run/dpdk/spdk_pid75220
00:34:16.220 Removing: /var/run/dpdk/spdk_pid75307
00:34:16.220 Removing: /var/run/dpdk/spdk_pid75405
00:34:16.220 Removing: /var/run/dpdk/spdk_pid75436
00:34:16.220 Removing: /var/run/dpdk/spdk_pid75467
00:34:16.220 Removing: /var/run/dpdk/spdk_pid75487
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77298
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77424
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77428
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77440
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77479
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77483
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77495
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77534
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77538
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77550
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77589
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77593
00:34:16.220 Removing: /var/run/dpdk/spdk_pid77605
00:34:16.220 Removing: /var/run/dpdk/spdk_pid78981
00:34:16.220 Removing: /var/run/dpdk/spdk_pid79067
00:34:16.220 Removing: /var/run/dpdk/spdk_pid80458
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82185
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82237
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82307
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82405
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82486
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82576
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82634
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82698
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82797
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82877
00:34:16.220 Removing: /var/run/dpdk/spdk_pid82962
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83025
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83089
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83188
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83268
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83353
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83415
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83480
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83573
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83655
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83745
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83803
00:34:16.220 Removing: /var/run/dpdk/spdk_pid83866
00:34:16.221 Removing: /var/run/dpdk/spdk_pid83929
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84000
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84092
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84177
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84261
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84324
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84387
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84450
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84519
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84611
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84698
00:34:16.221 Removing: /var/run/dpdk/spdk_pid84837
00:34:16.221 Removing: /var/run/dpdk/spdk_pid85104
00:34:16.221 Removing: /var/run/dpdk/spdk_pid85135
00:34:16.221 Removing: /var/run/dpdk/spdk_pid85577
00:34:16.221 Removing: /var/run/dpdk/spdk_pid85751
00:34:16.221 Removing: /var/run/dpdk/spdk_pid85840
00:34:16.221 Removing: /var/run/dpdk/spdk_pid85957
00:34:16.221 Removing: /var/run/dpdk/spdk_pid85994
00:34:16.221 Removing: /var/run/dpdk/spdk_pid86019
00:34:16.221 Removing: /var/run/dpdk/spdk_pid86314
00:34:16.221 Removing: /var/run/dpdk/spdk_pid86352
00:34:16.221 Removing: /var/run/dpdk/spdk_pid86408
00:34:16.482 Removing: /var/run/dpdk/spdk_pid86771
00:34:16.482 Removing: /var/run/dpdk/spdk_pid86916
00:34:16.482 Removing: /var/run/dpdk/spdk_pid87725
00:34:16.482 Removing: /var/run/dpdk/spdk_pid87842
00:34:16.482 Removing: /var/run/dpdk/spdk_pid87990
00:34:16.482 Removing: /var/run/dpdk/spdk_pid88086
00:34:16.482 Removing: /var/run/dpdk/spdk_pid88407
00:34:16.482 Removing: /var/run/dpdk/spdk_pid88655
00:34:16.482 Removing: /var/run/dpdk/spdk_pid88996
00:34:16.482 Removing: /var/run/dpdk/spdk_pid89161
00:34:16.482 Removing: /var/run/dpdk/spdk_pid89313
00:34:16.482 Removing: /var/run/dpdk/spdk_pid89350
00:34:16.482 Removing: /var/run/dpdk/spdk_pid89505
00:34:16.482 Removing: /var/run/dpdk/spdk_pid89525
00:34:16.482 Removing: /var/run/dpdk/spdk_pid89561
00:34:16.482 Removing: /var/run/dpdk/spdk_pid89804
00:34:16.482 Removing: /var/run/dpdk/spdk_pid90023
00:34:16.482 Removing: /var/run/dpdk/spdk_pid90599
00:34:16.482 Removing: /var/run/dpdk/spdk_pid91283
00:34:16.482 Removing: /var/run/dpdk/spdk_pid91867
00:34:16.482 Removing: /var/run/dpdk/spdk_pid92642
00:34:16.482 Removing: /var/run/dpdk/spdk_pid92773
00:34:16.482 Removing: /var/run/dpdk/spdk_pid92850
00:34:16.482 Removing: /var/run/dpdk/spdk_pid93390
00:34:16.482 Removing: /var/run/dpdk/spdk_pid93439
00:34:16.482 Removing: /var/run/dpdk/spdk_pid93994
00:34:16.482 Removing: /var/run/dpdk/spdk_pid94553
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95258
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95386
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95413
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95466
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95517
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95577
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95790
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95860
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95920
00:34:16.482 Removing: /var/run/dpdk/spdk_pid95976
00:34:16.482 Removing: /var/run/dpdk/spdk_pid96011
00:34:16.482 Removing: /var/run/dpdk/spdk_pid96072
00:34:16.482 Removing: /var/run/dpdk/spdk_pid96241
00:34:16.482 Removing: /var/run/dpdk/spdk_pid96444
00:34:16.482 Removing: /var/run/dpdk/spdk_pid97011
00:34:16.482 Removing: /var/run/dpdk/spdk_pid97712
00:34:16.482 Removing: /var/run/dpdk/spdk_pid98247
00:34:16.482 Removing: /var/run/dpdk/spdk_pid98917
00:34:16.482 Clean
00:34:16.482 18:28:50 -- common/autotest_common.sh@1453 -- # return 0
00:34:16.482 18:28:50 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:16.482 18:28:50 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:16.482 18:28:50 -- common/autotest_common.sh@10 -- # set +x
00:34:16.482 18:28:50 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:16.482 18:28:50 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:16.482 18:28:50 -- common/autotest_common.sh@10 -- # set +x
00:34:16.482 18:28:50 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:16.482 18:28:50 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:16.482 18:28:50 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:16.482 18:28:50 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:16.482 18:28:50 -- spdk/autotest.sh@398 -- # hostname
00:34:16.482 18:28:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:16.744 geninfo: WARNING: invalid characters removed from testname!
00:34:43.336 18:29:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:44.721 18:29:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:46.636 18:29:20 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:48.622 18:29:22 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:50.537 18:29:24 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
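The coverage post-processing running here is a fixed lcov pipeline: capture coverage from the build tree, merge it with the pre-test baseline, then strip vendored, system, and example/app sources from the combined report. Condensed to its essentials, with the repeated --rc flags and long output paths omitted for readability:

  lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -o cov_test.info   # capture test coverage
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info                 # merge baseline + test
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                      # drop vendored DPDK
  lcov -q -r cov_total.info '/usr/*' '*/examples/vmd/*' -o cov_total.info     # drop system/example code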
00:34:52.450 18:29:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:55.754 18:29:29 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:55.754 18:29:29 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:55.754 18:29:29 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:55.754 18:29:29 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:55.754 18:29:29 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:55.754 18:29:29 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:55.754 + [[ -n 5775 ]]
00:34:55.754 + sudo kill 5775
00:34:55.764 [Pipeline] }
00:34:55.779 [Pipeline] // timeout
00:34:55.784 [Pipeline] }
00:34:55.798 [Pipeline] // stage
00:34:55.803 [Pipeline] }
00:34:55.816 [Pipeline] // catchError
00:34:55.825 [Pipeline] stage
00:34:55.827 [Pipeline] { (Stop VM)
00:34:55.839 [Pipeline] sh
00:34:56.123 + vagrant halt
00:34:58.661 ==> default: Halting domain...
00:35:03.961 [Pipeline] sh
00:35:04.247 + vagrant destroy -f
00:35:06.794 ==> default: Removing domain...
00:35:07.381 [Pipeline] sh
00:35:07.665 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:07.676 [Pipeline] }
00:35:07.691 [Pipeline] // stage
00:35:07.696 [Pipeline] }
00:35:07.710 [Pipeline] // dir
00:35:07.716 [Pipeline] }
00:35:07.730 [Pipeline] // wrap
00:35:07.736 [Pipeline] }
00:35:07.749 [Pipeline] // catchError
00:35:07.758 [Pipeline] stage
00:35:07.760 [Pipeline] { (Epilogue)
00:35:07.773 [Pipeline] sh
00:35:08.061 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:13.345 [Pipeline] catchError
00:35:13.347 [Pipeline] {
00:35:13.359 [Pipeline] sh
00:35:13.643 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:13.904 Artifacts sizes are good
00:35:13.915 [Pipeline] }
00:35:13.929 [Pipeline] // catchError
00:35:13.939 [Pipeline] archiveArtifacts
00:35:13.982 Archiving artifacts
00:35:14.085 [Pipeline] cleanWs
00:35:14.098 [WS-CLEANUP] Deleting project workspace...
00:35:14.098 [WS-CLEANUP] Deferred wipeout is used...
00:35:14.105 [WS-CLEANUP] done
00:35:14.107 [Pipeline] }
00:35:14.120 [Pipeline] // stage
00:35:14.125 [Pipeline] }
00:35:14.139 [Pipeline] // node
00:35:14.144 [Pipeline] End of Pipeline
00:35:14.188 Finished: SUCCESS